Initial commit

Zhongwei Li
2025-11-29 18:27:20 +08:00
commit b35e761c82
11 changed files with 3059 additions and 0 deletions

11
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,11 @@
{
"name": "flox",
"description": "Flox development environment and deployment plugin. Includes expert guidance for package management, services, builds and package distribution, containerization, environment composition and layering. Works with the Flox MCP server for enhanced functionality.",
"version": "1.0.0",
"author": {
"name": "Flox"
},
"skills": [
"./skills"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# flox
Flox development environment and deployment plugin. Includes expert guidance for package management, services, builds and package distribution, containerization, environment composition and layering. Works with the Flox MCP server for enhanced functionality.

73
plugin.lock.json Normal file

@@ -0,0 +1,73 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:flox/flox-agentic:flox-plugin",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "4043f54aaaa5d6748d9c94dfcc9e7057b9194643",
"treeHash": "132a12f226cf6cbfc190fed94055bfb8ae9522b9984839f470cd47e5877377d4",
"generatedAt": "2025-11-28T10:16:55.153081Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "flox",
"description": "Flox development environment and deployment plugin. Includes expert guidance for package management, services, builds and package distribution, containerization, environment composition and layering. Works with the Flox MCP server for enhanced functionality.",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "62c975801bb38b25754d3870ac254ffe71188b37f0eccdb6cd1e5a7c481bd16c"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "4f08266fc4c78c6e99b4ac26323190afd40570ab9f7030e1256c7820f71c9782"
},
{
"path": "skills/README.md",
"sha256": "e589b399590fe661359bcc89324996ab79ac53053d287f80ce979d15f49b9db4"
},
{
"path": "skills/flox-sharing/SKILL.md",
"sha256": "67e71414026996e635acad9cd2829a311778828f622220711fccce05e5274a72"
},
{
"path": "skills/flox-services/SKILL.md",
"sha256": "a6608db65d9c480a2d0d9fd461367a6a358e7673b03e001dbbe7442726b1acab"
},
{
"path": "skills/flox-environments/SKILL.md",
"sha256": "7ece3329c2acea714efff04b9fef8dbd7fe98505b254b3e8e96a4fb4d962343e"
},
{
"path": "skills/flox-builds/SKILL.md",
"sha256": "0b47de45963b8507b30a421ede8ba305fbb1d773f70b7490ee275adf5816880f"
},
{
"path": "skills/flox-publish/SKILL.md",
"sha256": "33d3cc7cfb64337c54d3484992cac5b5a86efc78edef3d6e3a04e05880e72cdd"
},
{
"path": "skills/flox-cuda/SKILL.md",
"sha256": "3ed977eb97be9d7de2f9eaeb7ae552946f5fafb5a148810780ef4f60126c0566"
},
{
"path": "skills/flox-containers/SKILL.md",
"sha256": "d5cf632bbd4e89b339d5d9970f3fdc12a41e552f63c81dee3f1fd90f673cc920"
}
],
"dirSha256": "132a12f226cf6cbfc190fed94055bfb8ae9522b9984839f470cd47e5877377d4"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

4
skills/README.md Normal file

@@ -0,0 +1,4 @@
# Installation
/plugin marketplace add owner/repo
/plugin install flox@flox

456
skills/flox-builds/SKILL.md Normal file

@@ -0,0 +1,456 @@
---
name: flox-builds
description: Building and packaging applications with Flox. Use for manifest builds, Nix expression builds, sandbox modes, multi-stage builds, and packaging assets.
---
# Flox Build System Guide
## Build System Overview
Flox supports two build modes, each with its own strengths:
**Manifest builds** enable you to define your build steps in your manifest and reuse your existing build scripts and toolchains. Flox manifests are declarative artifacts, expressed in TOML.
Manifest builds:
- Make it easy to get started, requiring few if any changes to your existing workflows
- Can run inside a sandbox (using `sandbox = "pure"`) for reproducible builds
- Are best for getting going fast with existing projects
**Nix expression builds** guarantee build-time reproducibility because they're both isolated and purely functional. Their learning curve is steeper because they require proficiency with the Nix language.
Nix expression builds:
- Are isolated by default. The Nix sandbox seals the build off from the host system, so no host state leaks in
- Are functional. A Nix build is defined as a pure function of its declared inputs
You can mix both approaches in the same project, but package names must be unique.
## Core Commands
```bash
flox build # Build all targets
flox build app docs # Build specific targets
flox build -d /path/to/project # Build in another directory
flox build -v # Verbose output
flox build .#hello # Build specific Nix expression
```
## Development vs Runtime: The Two-Environment Pattern
A common workflow involves **two separate environments**:
### Development Environment (Build-Time)
Contains source code, build tools, and build definitions:
```toml
# project-dev/.flox/env/manifest.toml (in git with source code)
[install]
gcc.pkg-path = "gcc13"
make.pkg-path = "make"
python.pkg-path = "python311Full"
uv.pkg-path = "uv"
[build.myapp]
command = '''
make build
mkdir -p $out/bin
cp build/myapp $out/bin/
'''
version = "1.0.0"
```
**Workflow:**
```bash
cd project-dev
flox activate
flox build myapp
flox publish -o myorg myapp
```
### Runtime Environment (Consume-Time)
Contains only the published package and runtime dependencies:
```toml
# project-runtime/.flox/env/manifest.toml (can push to FloxHub)
[install]
myapp.pkg-path = "myorg/myapp" # The published package
```
**Workflow:**
```bash
cd project-runtime
flox init
flox install myorg/myapp
flox push # Share runtime environment without source code
```
**Why separate environments?**
- Development environment: Heavy (build tools, source code, dev dependencies)
- Runtime environment: Lightweight (only published package and runtime needs)
- Security: Runtime environments don't expose source code
- Clarity: Clear separation between building and consuming
- Rollback: Can rollback the live generation of a runtime environment without affecting the development environment
**Note**: You can also install published packages into existing environments (other projects, production environments, etc.), not just dedicated runtime environments.
## Manifest Builds
Flox treats a **manifest build** as a short, deterministic Bash script that runs inside an activated environment and copies its deliverables into `$out`. Anything copied there becomes a first-class, versioned package that can later be published and installed like any other catalog artifact.
### Critical insights from real-world packaging:
- **Build hooks don't run**: `[hook]` scripts DO NOT execute during `flox build` - only during interactive `flox activate`
- **Guard env vars**: Always use `${FLOX_ENV_CACHE:-}` with default fallback in hooks to avoid build failures
- **Wrapper scripts pattern**: Create launcher scripts in `$out/bin/` that set up runtime environment:
```bash
cat > "$out/bin/myapp" << 'EOF'
#!/usr/bin/env bash
APP_ROOT="$(dirname "$(dirname "$(readlink -f "$0")")")"
export PYTHONPATH="$APP_ROOT/share/myapp:$PYTHONPATH"
exec python3 "$APP_ROOT/share/myapp/main.py" "$@"
EOF
chmod +x "$out/bin/myapp"
```
- **User config pattern**: Default to `~/.myapp/` for user configs, not `$FLOX_ENV_CACHE` (packages are immutable)
- **Model/data directories**: Create user directories at runtime, not build time:
```bash
mkdir -p "${MYAPP_DIR:-$HOME/.myapp}/models"
```
- **Python package strategy**: Don't bundle Python deps - include `requirements.txt` and setup script:
```bash
# In build, create setup script:
cat > "$out/bin/myapp-setup" << 'EOF'
venv="${VENV:-$HOME/.myapp/venv}"
uv venv "$venv" --python python3
uv pip install --python "$venv/bin/python" -r "$APP_ROOT/share/myapp/requirements.txt"
EOF
```
- **Dual-environment workflow**: Use one environment for building (`project-dev/`), another for consuming (`project-runtime/`). See "Development vs Runtime: The Two-Environment Pattern" section above for details.
### Build Definition Syntax
```toml
[build.<name>]
command = ''' # required Bash, multiline string
<your build steps> # e.g. cargo build, npm run build
mkdir -p $out/bin
cp path/to/artifact $out/bin/<name>
'''
version = "1.2.3" # optional
description = "one-line summary" # optional
sandbox = "pure" | "off" # default: off
runtime-packages = [ "id1", "id2" ] # optional
```
**One table per package.** Multiple `[build.*]` tables let you publish, for example, a stripped release binary and a debug build from the same sources.
**Bash only.** The script executes under `set -euo pipefail`. If you need zsh or fish features, invoke them explicitly inside the script.
**Environment parity.** Before your script runs, Flox performs the equivalent of `flox activate` — so every tool listed in `[install]` is on PATH.
**Package groups and builds.** Only packages in the `toplevel` group (default) are available during builds. Packages with explicit `pkg-group` settings won't be accessible in build commands unless also installed to `toplevel`.
**Referencing other builds.** `${other}` expands to the `$out` of `[build.other]` and forces that build to run first, enabling multi-stage flows (e.g. vendoring → compilation).
## Purity and Sandbox Control
| sandbox value | Filesystem scope | Network | Typical use-case |
|---------------|------------------|---------|------------------|
| `"off"` (default) | Project working tree; complete host FS | allowed | Fast, iterative dev builds |
| `"pure"` | Git-tracked files only, copied to tmp | Linux: blocked<br>macOS: allowed | Reproducible, host-agnostic packages |
Pure mode highlights undeclared inputs early and is mandatory for builds intended for CI/CD publication. When a pure build needs pre-fetched artifacts (e.g. language modules) use a two-stage pattern:
```toml
[build.deps]
command = '''go mod vendor -o $out/etc/vendor'''
sandbox = "off"
[build.app]
command = '''
cp -r ${deps}/etc/vendor ./vendor
go build ./...
mkdir -p $out/bin
cp app $out/bin/
'''
sandbox = "pure"
```
## $out Layout and Filesystem Hierarchy
Only files placed under `$out` survive. Follow FHS conventions:
| Path | Purpose |
|------|---------|
| `$out/bin` / `$out/sbin` | CLI and daemon binaries (must be `chmod +x`) |
| `$out/lib`, `$out/libexec` | Shared libraries, helper programs |
| `$out/share/man` | Man pages (gzip them) |
| `$out/etc` | Configuration shipped with the package |
Scripts or binaries stored elsewhere will not end up on callers' paths.
## Running Manifest Builds
```bash
# Build every target in the manifest
flox build
# Build a subset
flox build app docs
# Build a manifest in another directory
flox build -d /path/to/project
```
Results appear as immutable symlinks: `./result-<name>` → `/nix/store/...-<name>-<version>`.
To execute a freshly built binary: `./result-app/bin/app`.
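A quick way to confirm what a build produced is to inspect the result symlink (the target and binary names below are illustrative):
```bash
# the result symlink points into the read-only Nix store
ls -l result-app          # result-app -> /nix/store/<hash>-app-<version>
# browse the package layout and run the freshly built binary
ls -R result-app/
./result-app/bin/app --help
```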
## Multi-Stage Examples
### Rust release binary plus source tar
```toml
[build.bin]
command = '''
cargo build --release
mkdir -p $out/bin
cp target/release/myproject $out/bin/
'''
version = "0.9.0"
[build.src]
command = '''
git archive --format=tar HEAD | gzip > $out/myproject-${bin.version}.tar.gz
'''
sandbox = "pure"
```
`${bin.version}` resolves because both builds share the same manifest.
### Go with vendored dependencies
```toml
[build.vendor]
command = '''
go mod vendor
mkdir -p $out/vendor
cp -r vendor/* $out/vendor/
'''
sandbox = "off"
[build.app]
command = '''
cp -r ${vendor}/vendor ./
go build -mod=vendor -o $out/bin/myapp
'''
sandbox = "pure"
```
## Trimming Runtime Dependencies
By default, every package in the `toplevel` install-group becomes a runtime dependency of your build's closure—even if it was only needed at compile time.
Declare a minimal list instead:
```toml
[install]
clang.pkg-path = "clang"
pytest.pkg-path = "pytest"
[build.cli]
command = '''
make
mv build/cli $out/bin/
'''
runtime-packages = [ "clang" ] # exclude pytest from runtime closure
```
Smaller closures copy faster and occupy less disk when installed on users' systems.
## Version and Description Metadata
Flox surfaces these fields in `flox search`, `flox show`, and during publication.
```toml
[build.mytool]
version.command = "git describe --tags"
description = "High-performance log shipper"
```
Alternative forms:
```toml
version = "1.4.2" # static string
version.file = "VERSION.txt" # read at build time
```
## Cross-Platform Considerations for Manifest Builds
`flox build` targets the host's system triple. To ship binaries for additional platforms, trigger the build on machines (or CI runners) of those architectures:
```
linux-x86_64 → build → publish
darwin-aarch64 → build → publish
```
The manifest can remain identical across hosts.
## Beyond Code — Packaging Assets
Any artifact that can be copied into `$out` can be versioned and installed:
### Nginx baseline config
```toml
[build.nginx_cfg]
command = '''mkdir -p $out/etc && cp nginx.conf $out/etc/'''
```
### Organization-wide .proto schema bundle
```toml
[build.proto]
command = '''
mkdir -p $out/share/proto
cp proto/**/*.proto $out/share/proto/
'''
```
Teams install these packages and reference them via `$FLOX_ENV/etc/nginx.conf` or `$FLOX_ENV/share/proto`.
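For example, a consuming environment might install and reference these assets like this (the `myorg` catalog and package ids are hypothetical):
```bash
# install the published asset packages into a consuming environment
flox install myorg/nginx_cfg myorg/proto
# reference them via $FLOX_ENV inside the activated environment
flox activate -- nginx -t -c "$FLOX_ENV/etc/nginx.conf"
flox activate -- ls "$FLOX_ENV/share/proto"
```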
## Nix Expression Builds
You can write a Nix expression instead of (or in addition to) defining a manifest build.
Put `*.nix` build files in `.flox/pkgs/` for Nix expression builds. Git add all files before building.
### File Naming
- `hello.nix` → package named `hello`
- `hello/default.nix` → package named `hello`
### Common Patterns
**Shell Script**
```nix
{writeShellApplication, curl}:
writeShellApplication {
name = "my-ip";
runtimeInputs = [ curl ];
text = ''curl icanhazip.com'';
}
```
**Your Project**
```nix
{ rustPlatform, lib }:
rustPlatform.buildRustPackage {
pname = "my-app";
version = "0.1.0";
src = ../../.;
cargoLock.lockFile = ../../Cargo.lock;
}
```
**Update Version**
```nix
{ hello, fetchurl }:
hello.overrideAttrs (finalAttrs: _: {
version = "2.12.2";
src = fetchurl {
url = "mirror://gnu/hello/hello-${finalAttrs.version}.tar.gz";
hash = "sha256-WpqZbcKSzCTc9BHO6H6S9qrluNE72caBm0x6nc4IGKs=";
};
})
```
**Apply Patches**
```nix
{ hello }:
hello.overrideAttrs (oldAttrs: {
patches = (oldAttrs.patches or []) ++ [ ./my.patch ];
})
```
### Hash Generation
1. Use `hash = "";`
2. Run `flox build`
3. Copy hash from error message
### Commands
- `flox build` - build all
- `flox build .#hello` - build specific
- `git add .flox/pkgs/*` - track files
## Language-Specific Build Examples
### Python Application
```toml
[build.myapp]
command = '''
mkdir -p $out/bin $out/share/myapp
# Copy application code
cp -r src/* $out/share/myapp/
cp requirements.txt $out/share/myapp/
# Create wrapper script
cat > $out/bin/myapp << 'EOF'
#!/usr/bin/env bash
APP_ROOT="$(dirname "$(dirname "$(readlink -f "$0")")")"
export PYTHONPATH="$APP_ROOT/share/myapp:$PYTHONPATH"
exec python3 "$APP_ROOT/share/myapp/main.py" "$@"
EOF
chmod +x $out/bin/myapp
'''
version = "1.0.0"
```
### Node.js Application
```toml
[build.webapp]
command = '''
npm ci
npm run build
mkdir -p $out/share/webapp
cp -r dist/* $out/share/webapp/
cp package.json package-lock.json $out/share/webapp/
cd $out/share/webapp && npm ci --production
'''
version = "1.0.0"
```
### Rust Binary
```toml
[build.cli]
command = '''
cargo build --release
mkdir -p $out/bin
cp target/release/mycli $out/bin/
'''
version.command = "cargo metadata --no-deps --format-version 1 | jq -r '.packages[0].version'"
```
## Debugging Build Issues
### Common Problems
**Build hooks don't run**: `[hook]` scripts DO NOT execute during `flox build`
**Package groups**: Only `toplevel` group packages available during builds
**Network access**: Pure builds can't access network on Linux
### Debugging Steps
1. Check build output: `flox build -v`
2. Inspect result: `ls -la result-<name>/`
3. Test binary: `./result-<name>/bin/<name>`
4. Check dependencies: `nix-store -q --references result-<name>`
## Related Skills
- **flox-environments** - Setting up development and runtime environments
- **flox-publish** - Publishing built packages to catalogs, understanding the dev→publish→runtime workflow
- **flox-containers** - Building container images

484
skills/flox-containers/SKILL.md Normal file

@@ -0,0 +1,484 @@
---
name: flox-containers
description: Containerizing Flox environments with Docker/Podman. Use for creating container images, OCI exports, multi-stage builds, and deployment workflows.
---
# Flox Containerization Guide
## Core Commands
```bash
flox containerize # Export to default tar file
flox containerize -f ./mycontainer.tar # Export to specific file
flox containerize --runtime docker # Export directly to Docker
flox containerize --runtime podman # Export directly to Podman
flox containerize -f - | docker load # Pipe to Docker
flox containerize --tag v1.0 # Tag container image
flox containerize -r owner/env # Containerize remote environment
```
## Basic Usage
### Export to File
```bash
# Export to file
flox containerize -f ./mycontainer.tar
docker load -i ./mycontainer.tar
# Or use default filename: {name}-container.tar
flox containerize
docker load -i myenv-container.tar
```
### Export Directly to Runtime
```bash
# Auto-detects docker or podman
flox containerize --runtime docker
# Explicit runtime selection
flox containerize --runtime podman
```
### Pipe to Stdout
```bash
# Pipe directly to Docker
flox containerize -f - | docker load
# With tagging
flox containerize --tag v1.0 -f - | docker load
```
## How Containers Behave
**Containers activate the Flox environment on startup** (like `flox activate`):
- **Interactive**: `docker run -it <image>` → Bash shell with environment activated
- **Non-interactive**: `docker run <image> <cmd>` → Runs command with environment activated (like `flox activate -- <cmd>`)
- All packages, variables, and hooks are available inside the container
**Note**: Flox sets an entrypoint that activates the environment, then runs `cmd` inside that activation.
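A rough sketch of what that looks like in practice (image and package names are placeholders):
```bash
# interactive: lands in a bash shell with the environment activated
docker run -it myenv:latest

# non-interactive: the command runs inside the activation,
# so packages from [install] are already on PATH
docker run myenv:latest python --version
docker run myenv:latest env | grep FLOX_ENV
```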
## Command Options
```bash
flox containerize
[-f <file>] # Output file (- for stdout); defaults to {name}-container.tar
[--runtime <runtime>] # docker/podman (auto-detects if not specified)
[--tag <tag>] # Container tag (e.g., v1.0, latest)
[-d <path>] # Path to .flox/ directory
[-r <owner/name>] # Remote environment from FloxHub
```
## Manifest Configuration
Configure container in `[containerize.config]` (experimental):
```toml
[containerize.config]
user = "appuser" # Username or uid:gid format
exposed-ports = ["8080/tcp"] # Ports to expose (tcp/udp/default:tcp)
cmd = ["python", "app.py"] # Command to run (receives activated env)
volumes = ["/data", "/config"] # Mount points for persistent data
working-dir = "/app" # Working directory
labels = { version = "1.0" } # Arbitrary metadata
stop-signal = "SIGTERM" # Signal to stop container
```
### Configuration Options Explained
**user**: Run container as specific user
- Username: `user = "appuser"`
- UID:GID: `user = "1000:1000"`
**exposed-ports**: Network ports to expose
- TCP: `["8080/tcp"]`
- UDP: `["8125/udp"]`
- Default protocol is tcp: `["8080"]` = `["8080/tcp"]`
**cmd**: Command to run in container
- Array form: `cmd = ["python", "app.py"]`
- Empty for service-based: `cmd = []`
**volumes**: Mount points for persistent data
- List paths: `volumes = ["/data", "/config", "/logs"]`
**working-dir**: Initial working directory
- Absolute path: `working-dir = "/app"`
**labels**: Arbitrary metadata
- Key-value pairs: `labels = { version = "1.0", env = "production" }`
**stop-signal**: Signal to stop container
- Common: `"SIGTERM"`, `"SIGINT"`, `"SIGKILL"`
## Complete Workflow Examples
### Flask Web Application
```bash
# Create environment
flox init
flox install python311 flask
# Configure for container
cat >> .flox/env/manifest.toml << 'EOF'
[containerize.config]
exposed-ports = ["5000/tcp"]
cmd = ["python", "-m", "flask", "run", "--host=0.0.0.0"]
working-dir = "/app"
user = "flask"
EOF
# Build and run
flox containerize -f - | docker load
docker run -p 5000:5000 -v $(pwd):/app <container-id>
```
### Node.js Application
```bash
flox init
flox install nodejs
cat >> .flox/env/manifest.toml << 'EOF'
[containerize.config]
exposed-ports = ["3000/tcp"]
cmd = ["npm", "start"]
working-dir = "/app"
EOF
flox containerize --tag myapp:latest --runtime docker
docker run -p 3000:3000 -v $(pwd):/app myapp:latest
```
### Database Container
```bash
flox init
flox install postgresql
# Set up service in manifest
flox edit
# Add service and container config
cat >> .flox/env/manifest.toml << 'EOF'
[services.postgres]
command = '''
mkdir -p /data/postgres
if [ ! -d "/data/postgres/pgdata" ]; then
initdb -D /data/postgres/pgdata
fi
exec postgres -D /data/postgres/pgdata -h 0.0.0.0
'''
is-daemon = true
[containerize.config]
exposed-ports = ["5432/tcp"]
volumes = ["/data"]
cmd = [] # Service starts automatically
EOF
flox containerize -f - | docker load
docker run -p 5432:5432 -v pgdata:/data <container-id>
```
## Common Patterns
### Service Containers
Services start automatically when cmd is empty:
```toml
[services.web]
command = "python -m http.server 8000"
[containerize.config]
exposed-ports = ["8000/tcp"]
cmd = [] # Service starts automatically
```
### Multi-Stage Pattern
Build in one environment, run in another:
```bash
# Build environment with all dev tools
cd build-env
flox activate -- flox build myapp
# Runtime environment with minimal deps
cd ../runtime-env
flox install myapp
flox containerize --tag production -f - | docker load
# Run
docker run production
```
### Remote Environment Containers
Containerize shared team environments:
```bash
# Containerize remote environment
flox containerize -r team/python-ml --tag latest --runtime docker
# Run it
docker run -it team-python-ml:latest
```
### Multi-Service Container
```toml
[services.db]
command = '''exec postgres -D "$FLOX_ENV_CACHE/postgres"'''
is-daemon = true
[services.cache]
command = '''exec redis-server'''
is-daemon = true
[services.api]
command = '''exec python -m uvicorn main:app --host 0.0.0.0'''
[containerize.config]
exposed-ports = ["8000/tcp", "5432/tcp", "6379/tcp"]
cmd = [] # All services start automatically
```
## Platform-Specific Notes
### macOS
- Requires docker/podman runtime (uses proxy container for builds)
- May prompt for file sharing permissions
- Creates `flox-nix` volume for caching
- Safe to remove when not building: `docker volume rm flox-nix`
### Linux
- Direct image creation without proxy
- No intermediate volumes needed
- Native container support
## Advanced Use Cases
### Custom Entrypoint with Wrapper Script
```toml
[build.entrypoint]
command = '''
cat > $out/bin/entrypoint.sh << 'EOF'
#!/usr/bin/env bash
set -e
# Custom initialization
echo "Initializing application..."
setup_app
# Run whatever command was passed
exec "$@"
EOF
chmod +x $out/bin/entrypoint.sh
'''
[containerize.config]
cmd = ["entrypoint.sh", "python", "app.py"]
```
### Health Check Support
```toml
[containerize.config]
cmd = ["python", "app.py"]
labels = { healthcheck = "curl -f http://localhost:8000/health || exit 1" }
```
Then in Docker:
```bash
docker run --health-cmd="curl -f http://localhost:8000/health || exit 1" \
--health-interval=30s \
myimage
```
### Multi-Architecture Builds
Build for different architectures:
```bash
# On x86_64 Linux
flox containerize --tag myapp:amd64 --runtime docker
# On ARM64 (aarch64) Linux
flox containerize --tag myapp:arm64 --runtime docker
# Create manifest
docker manifest create myapp:latest \
myapp:amd64 \
myapp:arm64
```
### Minimal Container Size
Create minimal runtime environment:
```toml
[install]
# Only runtime dependencies
python.pkg-path = "python311"
# No dev tools, no build tools
[build.app]
command = '''
# Build in build environment
python -m pip install --target=$out/lib/python -r requirements.txt
cp -r src $out/lib/python/
'''
runtime-packages = ["python"]
[containerize.config]
cmd = ["python", "-m", "myapp"]
```
## Container Registry Workflows
### Push to Registry
```bash
# Build container
flox containerize --tag myapp:v1.0 --runtime docker
# Tag for registry
docker tag myapp:v1.0 registry.company.com/myapp:v1.0
# Push
docker push registry.company.com/myapp:v1.0
```
### GitLab CI/CD
```yaml
containerize:
stage: build
script:
- flox containerize --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG --runtime docker
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
```
### GitHub Actions
```yaml
- name: Build container
run: |
flox containerize --tag ghcr.io/${{ github.repository }}:${{ github.sha }} --runtime docker
- name: Push to GHCR
run: |
echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
docker push ghcr.io/${{ github.repository }}:${{ github.sha }}
```
## Kubernetes Deployment
### Basic Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: registry.company.com/myapp:v1.0
ports:
- containerPort: 8000
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: myapp-data
```
### Service Definition
```yaml
apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
selector:
app: myapp
ports:
- port: 80
targetPort: 8000
type: LoadBalancer
```
## Debugging Container Issues
### Inspect Container
```bash
# Run interactively
docker run -it --entrypoint /bin/bash <image-id>
# Check environment
docker run <image-id> env
# Check what's in the image
docker run <image-id> ls -la /
```
### View Container Logs
```bash
# Follow logs
docker logs -f <container-id>
# Last 100 lines
docker logs --tail 100 <container-id>
```
### Execute Commands in Running Container
```bash
# Get a shell
docker exec -it <container-id> /bin/bash
# Run specific command
docker exec <container-id> flox list
```
## Best Practices
1. **Use specific tags**: Avoid `latest`, use semantic versioning
2. **Minimize layers**: Combine related operations in manifests
3. **Use .dockerignore equivalent**: Only include necessary files in build context
4. **Health checks**: Implement health check endpoints for services
5. **Security**: Run as non-root user when possible
6. **Volumes**: Use volumes for persistent data, not container filesystem
7. **Environment variables**: Make configuration overridable via env vars
8. **Logging**: Log to stdout/stderr, not files
## Related Skills
- **flox-environments** - Creating environments to containerize
- **flox-services** - Running services in containers
- **flox-builds** - Building artifacts before containerizing
- **flox-sharing** - Containerizing remote environments

514
skills/flox-cuda/SKILL.md Normal file

@@ -0,0 +1,514 @@
---
name: flox-cuda
description: CUDA and GPU development with Flox. Use for NVIDIA CUDA setup, GPU computing, deep learning frameworks, cuDNN, and cross-platform GPU/CPU development.
---
# Flox CUDA Development Guide
## Prerequisites & Authentication
- Sign up for early access at https://flox.dev
- Authenticate with `flox auth login`
- **Linux-only**: CUDA packages only work on `["aarch64-linux", "x86_64-linux"]`
- All CUDA packages are prefixed with `flox-cuda/` in the catalog
- **No macOS support**: Use Metal alternatives on Darwin
## Core Commands
```bash
# Search for CUDA packages
flox search cudatoolkit --all | grep flox-cuda
flox search nvcc --all | grep 12_8
# Show available versions
flox show flox-cuda/cudaPackages.cudatoolkit
# Install CUDA packages
flox install flox-cuda/cudaPackages_12_8.cuda_nvcc
flox install flox-cuda/cudaPackages.cuda_cudart
# Verify installation
nvcc --version
nvidia-smi
```
## Package Discovery
```bash
# Search for CUDA toolkit
flox search cudatoolkit --all | grep flox-cuda
# Search for specific versions
flox search nvcc --all | grep 12_8
# Show all available versions
flox show flox-cuda/cudaPackages.cudatoolkit
# Search for CUDA libraries
flox search libcublas --all | grep flox-cuda
flox search cudnn --all | grep flox-cuda
```
## Essential CUDA Packages
| Package Pattern | Purpose | Example |
|-----------------|---------|---------|
| `cudaPackages_X_Y.cudatoolkit` | Main CUDA Toolkit | `cudaPackages_12_8.cudatoolkit` |
| `cudaPackages_X_Y.cuda_nvcc` | NVIDIA C++ Compiler | `cudaPackages_12_8.cuda_nvcc` |
| `cudaPackages.cuda_cudart` | CUDA Runtime API | `cuda_cudart` |
| `cudaPackages_X_Y.libcublas` | Linear algebra | `cudaPackages_12_8.libcublas` |
| `cudaPackages_X_Y.libcufft` | Fast Fourier Transform | `cudaPackages_12_8.libcufft` |
| `cudaPackages_X_Y.libcurand` | Random number generation | `cudaPackages_12_8.libcurand` |
| `cudaPackages_X_Y.cudnn_9_11` | Deep neural networks | `cudaPackages_12_8.cudnn_9_11` |
| `cudaPackages_X_Y.nccl` | Multi-GPU communication | `cudaPackages_12_8.nccl` |
## Critical: Conflict Resolution
**CUDA packages have LICENSE file conflicts requiring explicit priorities:**
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cuda_nvcc.priority = 1 # Highest priority
cuda_cudart.pkg-path = "flox-cuda/cudaPackages.cuda_cudart"
cuda_cudart.systems = ["aarch64-linux", "x86_64-linux"]
cuda_cudart.priority = 2
cudatoolkit.pkg-path = "flox-cuda/cudaPackages_12_8.cudatoolkit"
cudatoolkit.systems = ["aarch64-linux", "x86_64-linux"]
cudatoolkit.priority = 3 # Lower for LICENSE conflicts
gcc.pkg-path = "gcc"
gcc-unwrapped.pkg-path = "gcc-unwrapped" # For libstdc++
gcc-unwrapped.priority = 5
```
## CUDA Version Selection
### CUDA 12.x (Current)
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cudatoolkit.pkg-path = "flox-cuda/cudaPackages_12_8.cudatoolkit"
cudatoolkit.priority = 3
cudatoolkit.systems = ["aarch64-linux", "x86_64-linux"]
```
### CUDA 11.x (Legacy Support)
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_11_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cudatoolkit.pkg-path = "flox-cuda/cudaPackages_11_8.cudatoolkit"
cudatoolkit.priority = 3
cudatoolkit.systems = ["aarch64-linux", "x86_64-linux"]
```
## Cross-Platform GPU Development
Dual CUDA/CPU packages for portability (Linux gets CUDA, macOS gets CPU fallback):
```toml
[install]
## CUDA packages (Linux only)
cuda-pytorch.pkg-path = "flox-cuda/python3Packages.torch"
cuda-pytorch.systems = ["x86_64-linux", "aarch64-linux"]
cuda-pytorch.priority = 1
## Non-CUDA packages (macOS + Linux fallback)
pytorch.pkg-path = "python313Packages.pytorch"
pytorch.systems = ["x86_64-darwin", "aarch64-darwin"]
pytorch.priority = 6 # Lower priority
```
## GPU Detection Pattern
**Dynamic CPU/GPU package installation in hooks:**
```bash
setup_gpu_packages() {
venv="$FLOX_ENV_CACHE/venv"
if [ ! -f "$FLOX_ENV_CACHE/.deps_installed" ]; then
if lspci 2>/dev/null | grep -E 'NVIDIA|AMD' > /dev/null; then
echo "GPU detected, installing CUDA packages"
uv pip install --python "$venv/bin/python" \
torch torchvision --index-url https://download.pytorch.org/whl/cu129
else
echo "No GPU detected, installing CPU packages"
uv pip install --python "$venv/bin/python" \
torch torchvision --index-url https://download.pytorch.org/whl/cpu
fi
touch "$FLOX_ENV_CACHE/.deps_installed"
fi
}
```
## Complete CUDA Environment Examples
### Basic CUDA Development
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cuda_cudart.pkg-path = "flox-cuda/cudaPackages.cuda_cudart"
cuda_cudart.priority = 2
cuda_cudart.systems = ["aarch64-linux", "x86_64-linux"]
gcc.pkg-path = "gcc"
gcc-unwrapped.pkg-path = "gcc-unwrapped"
gcc-unwrapped.priority = 5
[vars]
CUDA_VERSION = "12.8"
CUDA_HOME = "$FLOX_ENV"
[hook]
echo "CUDA $CUDA_VERSION environment ready"
echo "nvcc: $(nvcc --version | grep release)"
```
### Deep Learning with PyTorch
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cuda_cudart.pkg-path = "flox-cuda/cudaPackages.cuda_cudart"
cuda_cudart.priority = 2
cuda_cudart.systems = ["aarch64-linux", "x86_64-linux"]
libcublas.pkg-path = "flox-cuda/cudaPackages_12_8.libcublas"
libcublas.priority = 2
libcublas.systems = ["aarch64-linux", "x86_64-linux"]
cudnn.pkg-path = "flox-cuda/cudaPackages_12_8.cudnn_9_11"
cudnn.priority = 2
cudnn.systems = ["aarch64-linux", "x86_64-linux"]
python313Full.pkg-path = "python313Full"
uv.pkg-path = "uv"
gcc-unwrapped.pkg-path = "gcc-unwrapped"
gcc-unwrapped.priority = 5
[vars]
CUDA_VERSION = "12.8"
PYTORCH_CUDA_ALLOC_CONF = "max_split_size_mb:128"
[hook]
setup_pytorch_cuda() {
venv="$FLOX_ENV_CACHE/venv"
if [ ! -d "$venv" ]; then
uv venv "$venv" --python python3
fi
if [ -f "$venv/bin/activate" ]; then
source "$venv/bin/activate"
fi
if [ ! -f "$FLOX_ENV_CACHE/.deps_installed" ]; then
uv pip install --python "$venv/bin/python" \
torch torchvision torchaudio \
--index-url https://download.pytorch.org/whl/cu129
touch "$FLOX_ENV_CACHE/.deps_installed"
fi
}
setup_pytorch_cuda
```
### TensorFlow with CUDA
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cuda_cudart.pkg-path = "flox-cuda/cudaPackages.cuda_cudart"
cuda_cudart.priority = 2
cuda_cudart.systems = ["aarch64-linux", "x86_64-linux"]
cudnn.pkg-path = "flox-cuda/cudaPackages_12_8.cudnn_9_11"
cudnn.priority = 2
cudnn.systems = ["aarch64-linux", "x86_64-linux"]
python313Full.pkg-path = "python313Full"
uv.pkg-path = "uv"
[hook]
setup_tensorflow() {
venv="$FLOX_ENV_CACHE/venv"
[ ! -d "$venv" ] && uv venv "$venv" --python python3
[ -f "$venv/bin/activate" ] && source "$venv/bin/activate"
if [ ! -f "$FLOX_ENV_CACHE/.tf_installed" ]; then
uv pip install --python "$venv/bin/python" tensorflow[and-cuda]
touch "$FLOX_ENV_CACHE/.tf_installed"
fi
}
setup_tensorflow
```
### Multi-GPU Development
```toml
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
nccl.pkg-path = "flox-cuda/cudaPackages_12_8.nccl"
nccl.priority = 2
nccl.systems = ["aarch64-linux", "x86_64-linux"]
libcublas.pkg-path = "flox-cuda/cudaPackages_12_8.libcublas"
libcublas.priority = 2
libcublas.systems = ["aarch64-linux", "x86_64-linux"]
[vars]
CUDA_VISIBLE_DEVICES = "0,1,2,3" # All GPUs
NCCL_DEBUG = "INFO"
```
## Modular CUDA Environments
### Base CUDA Environment
```toml
# team/cuda-base
[install]
cuda_nvcc.pkg-path = "flox-cuda/cudaPackages_12_8.cuda_nvcc"
cuda_nvcc.priority = 1
cuda_nvcc.systems = ["aarch64-linux", "x86_64-linux"]
cuda_cudart.pkg-path = "flox-cuda/cudaPackages.cuda_cudart"
cuda_cudart.priority = 2
cuda_cudart.systems = ["aarch64-linux", "x86_64-linux"]
gcc.pkg-path = "gcc"
gcc-unwrapped.pkg-path = "gcc-unwrapped"
gcc-unwrapped.priority = 5
[vars]
CUDA_VERSION = "12.8"
CUDA_HOME = "$FLOX_ENV"
```
### CUDA Math Libraries
```toml
# team/cuda-math
[include]
environments = [{ remote = "team/cuda-base" }]
[install]
libcublas.pkg-path = "flox-cuda/cudaPackages_12_8.libcublas"
libcublas.priority = 2
libcublas.systems = ["aarch64-linux", "x86_64-linux"]
libcufft.pkg-path = "flox-cuda/cudaPackages_12_8.libcufft"
libcufft.priority = 2
libcufft.systems = ["aarch64-linux", "x86_64-linux"]
libcurand.pkg-path = "flox-cuda/cudaPackages_12_8.libcurand"
libcurand.priority = 2
libcurand.systems = ["aarch64-linux", "x86_64-linux"]
```
### CUDA Debugging Tools
```toml
# team/cuda-debug
[install]
cuda-gdb.pkg-path = "flox-cuda/cudaPackages_12_8.cuda-gdb"
cuda-gdb.systems = ["aarch64-linux", "x86_64-linux"]
nsight-systems.pkg-path = "flox-cuda/cudaPackages_12_8.nsight-systems"
nsight-systems.systems = ["aarch64-linux", "x86_64-linux"]
[vars]
CUDA_LAUNCH_BLOCKING = "1" # Synchronous kernel launches for debugging
```
### Layer for Development
```bash
# Base CUDA environment
flox activate -r team/cuda-base
# Add debugging tools when needed
flox activate -r team/cuda-base -- flox activate -r team/cuda-debug
```
## Testing CUDA Installation
### Verify CUDA Compiler
```bash
nvcc --version
```
### Check GPU Availability
```bash
nvidia-smi
```
### Compile Test Program
```bash
cat > hello_cuda.cu << 'EOF'
#include <stdio.h>
__global__ void hello() {
printf("Hello from GPU!\n");
}
int main() {
hello<<<1,1>>>();
cudaDeviceSynchronize();
return 0;
}
EOF
nvcc hello_cuda.cu -o hello_cuda
./hello_cuda
```
### Test PyTorch CUDA
```python
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")
print(f"GPU count: {torch.cuda.device_count()}")
if torch.cuda.is_available():
print(f"GPU name: {torch.cuda.get_device_name(0)}")
```
## Best Practices
### Always Use Priority Values
CUDA packages have predictable conflicts - assign explicit priorities
### Version Consistency
Use specific versions (e.g., `_12_8`) for reproducibility. Don't mix CUDA versions.
### Modular Design
Split base CUDA, math libs, and debugging into separate environments for flexibility
### Test Compilation
Verify `nvcc hello.cu -o hello` works after setup
### Platform Constraints
Always include `systems = ["aarch64-linux", "x86_64-linux"]`
### Memory Management
Set appropriate CUDA memory allocator configs:
```toml
[vars]
PYTORCH_CUDA_ALLOC_CONF = "max_split_size_mb:128"
CUDA_LAUNCH_BLOCKING = "0" # Async by default
```
## Common CUDA Gotchas
### CUDA Toolkit ≠ Complete Toolkit
The cudatoolkit package doesn't include all libraries. Add what you need:
- libcublas for linear algebra
- libcufft for FFT
- cudnn for deep learning
### License Conflicts
Every CUDA package may need explicit priority due to LICENSE file conflicts
### No macOS Support
CUDA is Linux-only. Use Metal-accelerated packages on Darwin when available
### Version Mixing
Don't mix CUDA versions. Use consistent `_X_Y` suffixes across all CUDA packages
### Python Virtual Environments
CUDA Python packages (PyTorch, TensorFlow) should be installed in venv with correct CUDA version
### Driver Requirements
Ensure NVIDIA driver supports your CUDA version. Check with `nvidia-smi`
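A quick compatibility check (the CUDA version in the `nvidia-smi` header is the maximum the installed driver supports):
```bash
# driver version and supported CUDA version appear in the nvidia-smi header
nvidia-smi
# driver version only, machine-readable
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# toolkit version installed in the environment
nvcc --version | grep release
```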
## Troubleshooting
### CUDA Not Found
```bash
# Check CUDA_HOME
echo $CUDA_HOME
# Check nvcc
which nvcc
nvcc --version
# Check library paths
echo $LD_LIBRARY_PATH
```
### PyTorch Not Using GPU
```python
import torch
print(torch.cuda.is_available()) # Should be True
print(torch.version.cuda) # Should match your CUDA version
# If False, reinstall with correct CUDA version
# uv pip install torch --index-url https://download.pytorch.org/whl/cu129
```
### Compilation Errors
```bash
# Check gcc/g++ version
gcc --version
g++ --version
# Ensure gcc-unwrapped is installed
flox list | grep gcc-unwrapped
# Check include paths
echo $CPATH
echo $LIBRARY_PATH
```
### Runtime Errors
```bash
# Check GPU visibility
echo $CUDA_VISIBLE_DEVICES
# Check for GPU
nvidia-smi
# Run with debug output
CUDA_LAUNCH_BLOCKING=1 python my_script.py
```
## Related Skills
- **flox-environments** - Setting up development environments
- **flox-sharing** - Composing CUDA base with project environments
- **flox-containers** - Containerizing CUDA environments for deployment
- **flox-services** - Running CUDA workloads as services

328
skills/flox-environments/SKILL.md Normal file

@@ -0,0 +1,328 @@
---
name: flox-environments
description: Manage reproducible development environments with Flox. **ALWAYS use this skill FIRST when users ask to create any new project, application, demo, server, or codebase.** Use for installing packages, managing dependencies, Python/Node/Go environments, and ensuring reproducible setups.
---
# Flox Environments Guide
## Working Style & Structure
- Use **modular, idempotent bash functions** in hooks
- Never, ever use absolute paths. Flox environments are designed to be reproducible. Use Flox's environment variables instead
- I REPEAT: NEVER, EVER USE ABSOLUTE PATHS. Don't do it. Use `$FLOX_ENV` for environment-specific runtime dependencies; use `$FLOX_ENV_PROJECT` for the project directory
- Name functions descriptively (e.g., `setup_postgres()`)
- Consider using **gum** for styled output when creating environments for interactive use; this is an anti-pattern in CI
- Put persistent data/configs in `$FLOX_ENV_CACHE`
- Return to `$FLOX_ENV_PROJECT` at end of hooks
- Use `mktemp` for temp files, clean up immediately
- Do not over-engineer: e.g., do not create unnecessary echo statements or superfluous comments; do not print unnecessary information displays in `[hook]` or `[profile]`; do not create helper functions or aliases without the user requesting these explicitly
## Configuration & Secrets
- Support `VARIABLE=value flox activate` pattern for runtime overrides
- Never store secrets in manifest; use:
- Environment variables
- `~/.config/<env_name>/` for persistent secrets
- Existing config files (e.g., `~/.aws/credentials`)
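A minimal sketch of the override pattern (the `MYAPP_PORT` variable and config path are illustrative):
```bash
# caller overrides a setting for one activation
MYAPP_PORT=9090 flox activate -- ./run-server.sh

# inside [hook], fall back to a default when no override is given
: "${MYAPP_PORT:=8080}"

# load persistent secrets from the user's config dir, never from the manifest
if [ -f "$HOME/.config/myapp/secrets.env" ]; then
  source "$HOME/.config/myapp/secrets.env"
fi
```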
## Flox Basics
- Flox is built on Nix; fully Nix-compatible
- Flox uses nixpkgs as its upstream; packages are _usually_ named the same; unlike nixpkgs, Flox Catalog has millions of historical package-version combinations
- Key paths:
- `.flox/env/manifest.toml`: Environment definition
- `.flox/env.json`: Environment metadata
- `$FLOX_ENV_CACHE`: Persistent, local-only storage (survives `flox delete`)
- `$FLOX_ENV_PROJECT`: Project root directory (where .flox/ lives)
- `$FLOX_ENV`: roughly analogous to `/usr`: contains the libs, includes, bins, configs, etc. available to a specific Flox environment
- Always use `flox init` to create environments
- Manifest changes take effect on next `flox activate` (not live reload)
## Core Commands
```bash
flox init # Create new env
flox search <string> [--all] # Search for a package
flox show <pkg> # Show available historical versions of a package
flox install <pkg> # Add package
flox list [-e | -c | -n | -a] # List installed packages
flox activate # Enter env
flox activate -- <cmd> # Run without subshell
flox edit # Edit manifest interactively
```
## Manifest Structure
- `[install]`: Package list with descriptors
- `[vars]`: Static variables
- `[hook]`: Non-interactive setup scripts
- `[profile]`: Shell-specific functions/aliases
- `[services]`: Service definitions (see flox-services skill)
- `[build]`: Reproducible build commands (see flox-builds skill)
- `[include]`: Compose other environments (see flox-sharing skill)
- `[options]`: Activation mode, supported systems
## The [install] Section
### Package Installation Basics
The `[install]` table specifies packages to install.
```toml
[install]
ripgrep.pkg-path = "ripgrep"
pip.pkg-path = "python310Packages.pip"
```
### Package Descriptors
Each entry has:
- **Key**: Install ID (e.g., `ripgrep`, `pip`) - your reference name for the package
- **Value**: Package descriptor - specifies what to install
### Catalog Descriptors (Most Common)
Options for packages from the Flox catalog:
```toml
[install]
example.pkg-path = "package-name" # Required: location in catalog
example.pkg-group = "mygroup" # Optional: group packages together
example.version = "1.2.3" # Optional: exact or semver range
example.systems = ["x86_64-linux"] # Optional: limit to specific platforms
example.priority = 3 # Optional: resolve file conflicts (lower = higher priority)
```
#### Key Options Explained:
**pkg-path** (required)
- Location in the package catalog
- Can be simple (`"ripgrep"`) or nested (`"python310Packages.pip"`)
- Can use array format: `["python310Packages", "pip"]`
**pkg-group**
- Groups packages that work well together
- Packages without explicit group belong to default group
- Groups upgrade together to maintain compatibility
- Use different groups to avoid version conflicts
**version**
- Exact: `"1.2.3"`
- Semver ranges: `"^1.2"`, `">=2.0"`
- Partial versions act as wildcards: `"1.2"` = latest 1.2.X
**systems**
- Constrains package to specific platforms
- Options: `"x86_64-linux"`, `"x86_64-darwin"`, `"aarch64-linux"`, `"aarch64-darwin"`
- Defaults to manifest's `options.systems` if omitted
**priority**
- Resolves file conflicts between packages
- Default: 5
- Lower number = higher priority wins conflicts
- **Critical for CUDA packages** (see flox-cuda skill)
### Practical Examples
```toml
[install]
# Platform-specific Python (Linux only)
python.pkg-path = "python311Full"
python.systems = ["x86_64-linux", "aarch64-linux"]
uv.pkg-path = "uv"
uv.systems = ["x86_64-linux", "aarch64-linux"]

# Version-pinned with custom priority
nodejs.pkg-path = "nodejs"
nodejs.version = "^20.0"
nodejs.priority = 1 # Takes precedence in conflicts

# Package group to avoid conflicts
gcc.pkg-path = "gcc12"
gcc.pkg-group = "stable"
```
## Language-Specific Patterns
### Python Virtual Environments
**venv creation pattern**: Always check existence before activation:
```bash
if [ ! -d "$venv" ]; then
uv venv "$venv" --python python3
fi
# Guard activation - venv creation might not be complete
if [ -f "$venv/bin/activate" ]; then
source "$venv/bin/activate"
fi
```
**Key patterns**:
- **venv location**: Always use `$FLOX_ENV_CACHE/venv` - survives environment rebuilds
- **uv with venv**: Use `uv pip install --python "$venv/bin/python"` NOT `"$venv/bin/python" -m uv`
- **Cache dirs**: Set `UV_CACHE_DIR` and `PIP_CACHE_DIR` to `$FLOX_ENV_CACHE` subdirs
- **Dependency installation flag**: Touch `$FLOX_ENV_CACHE/.deps_installed` to prevent reinstalls
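Putting those patterns together, a hedged `[hook]` sketch (package and directory names are illustrative):
```bash
setup_python_env() {
  export UV_CACHE_DIR="$FLOX_ENV_CACHE/uv-cache"
  export PIP_CACHE_DIR="$FLOX_ENV_CACHE/pip-cache"
  venv="$FLOX_ENV_CACHE/venv"

  # create the venv once; it survives environment rebuilds
  if [ ! -d "$venv" ]; then
    uv venv "$venv" --python python3
  fi
  # guard activation - venv creation might not be complete
  if [ -f "$venv/bin/activate" ]; then
    source "$venv/bin/activate"
  fi
  # install dependencies only once, flagged by a marker file
  if [ ! -f "$FLOX_ENV_CACHE/.deps_installed" ]; then
    uv pip install --quiet --python "$venv/bin/python" -r requirements.txt
    touch "$FLOX_ENV_CACHE/.deps_installed"
  fi
}
setup_python_env
```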
### C/C++ Development
- **Package Names**: `gbenchmark` not `benchmark`, `catch2_3` for Catch2, `gcc13`/`clang_18` for specific versions
- **System Constraints**: Linux-only tools need explicit systems: `valgrind.systems = ["x86_64-linux", "aarch64-linux"]`
- **Essential Groups**: Separate `compilers`, `build`, `debug`, `testing`, `libraries` groups prevent conflicts
- **libstdc++ Access**: ALWAYS include `gcc-unwrapped` for C++ stdlib headers/libs (gcc alone doesn't expose them):
```toml
gcc-unwrapped.pkg-path = "gcc-unwrapped"
gcc-unwrapped.priority = 5
gcc-unwrapped.pkg-group = "libraries"
```
### Node.js Development
- **Package managers**: Install `nodejs` (includes npm); add `yarn` or `pnpm` separately if needed
- **Version pinning**: Use `version = "^20.0"` for LTS, or exact versions for reproducibility
- **Global tools pattern**: Use `npx` for one-off tools, install commonly-used globals in manifest
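A small usage sketch under those conventions (project commands are illustrative):
```bash
flox install nodejs                      # npm ships with the nodejs package
flox activate -- npm ci                  # reproducible install from package-lock.json
flox activate -- npx prettier --check .  # one-off tool via npx, no global install
```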
### Platform-Specific Patterns
```toml
# Darwin-specific frameworks
IOKit.pkg-path = "darwin.apple_sdk.frameworks.IOKit"
IOKit.systems = ["x86_64-darwin", "aarch64-darwin"]
# Platform-preferred compilers
gcc.pkg-path = "gcc"
gcc.systems = ["x86_64-linux", "aarch64-linux"]
clang.pkg-path = "clang"
clang.systems = ["x86_64-darwin", "aarch64-darwin"]
# Darwin GNU compatibility layer
coreutils.pkg-path = "coreutils"
coreutils.systems = ["x86_64-darwin", "aarch64-darwin"]
```
## Best Practices
- Check manifest before installing new packages
- Use `return` not `exit` in hooks
- Define env vars with `${VAR:-default}`
- Use descriptive, prefixed function names in composed envs
- Cache downloads in `$FLOX_ENV_CACHE`
- Test activation with `flox activate -- <command>` before adding to services
- Use `--quiet` flag with uv/pip in hooks to reduce noise
## Editing Manifests Non-Interactively
```bash
flox list -c > /tmp/manifest.toml
# Edit with sed/awk
flox edit -f /tmp/manifest.toml
```
## Common Pitfalls
### Hooks Run Every Activation
Hooks run EVERY activation (keep them fast/idempotent)
### Hook vs Profile Functions
Hook functions are not available to users in the interactive shell; use `[profile]` for user-invokable commands/aliases
### Profile Code in Layered Environments
Profile code runs for each layered/composed environment; keep auto-run display logic in `[hook]` to avoid repetition
### Manifest Syntax Errors
Manifest syntax errors prevent ALL flox commands from working
### Package Search Case Sensitivity
Package search is case-sensitive; use `flox search --all` for broader results
## Troubleshooting Tips
### Package Conflicts
If packages conflict, use different `pkg-group` values or adjust `priority`
### Tricky Dependencies
- If you need `libstdc++`, install the `gcc-unwrapped` package; `gcc` alone does not provide it
- If a user working with Python asks for `uv`, they typically do not mean `uvicorn`; clarify which package they want
### Hook Issues
- Use `return` not `exit` in hooks
- Define env vars with `${VAR:-default}`
- Guard FLOX_ENV_CACHE usage: `${FLOX_ENV_CACHE:-}` with fallback
## Environment Layering
### What is Layering?
**Layering** is runtime stacking of environments where activate order matters. Each layer runs in its own subshell, preserving isolation while allowing tool composition.
### Core Layering Commands
```bash
# Layer debugging tools on base environment
flox activate -r team/base -- flox activate -r team/debug
# Layer multiple environments
flox activate -r team/db -- flox activate -r team/cache -- flox activate
# Layer local on remote
flox activate -r prod/app -- flox activate
```
### When to Use Layering
- **Ad hoc tool addition**: Add debugging/profiling tools temporarily
- **Development vs production**: Layer dev tools on production environment
- **Flexible composition**: Mix and match environments at runtime
- **Temporary utilities**: Add one-time tools without modifying environment
### Layering Use Cases
**Development tools on production environment:**
```bash
flox activate -r prod/app -- flox activate -r dev/tools
```
**Debugging tools on CUDA environment:**
```bash
flox activate -r team/cuda-base -- flox activate -r team/cuda-debug
```
**Temporary utilities:**
```bash
flox activate -r project/main -- flox activate -r utils/network
```
### Creating Layer-Optimized Environments
**Design for runtime stacking with potential conflicts:**
```toml
[vars]
# Prefix vars to avoid masking
MYAPP_PORT = "8080"
MYAPP_HOST = "localhost"
[profile]
common = '''
# Use unique, prefixed function names
myapp_setup() { ... }
myapp_debug() { ... }
'''
[services.myapp-db] # Prefix service names
command = "..."
```
**Best practices for layerable environments:**
- Single responsibility per environment
- Expect vars/binaries might be overridden by upper layers
- Document what the environment provides/expects
- Keep hooks fast and idempotent
- Use prefixed names to avoid collisions
## Related Skills
- **flox-services** - Running services and background processes
- **flox-builds** - Building and packaging applications
- **flox-publish** - Publishing packages to catalogs
- **flox-sharing** - Environment composition and layering
- **flox-containers** - Containerizing environments
- **flox-cuda** - CUDA/GPU development environments

481
skills/flox-publish/SKILL.md Normal file

@@ -0,0 +1,481 @@
---
name: flox-publish
description: Use for publishing user packages to flox for use in Flox environments. Use for package distribution and sharing of builds defined in a flox environment.
---
# Flox Package Publishing Guide
## Core Commands
```bash
flox publish # Publish all packages
flox publish my_package # Publish single package
flox publish -o myorg package # Publish to organization
flox publish -o myuser package # Publish to personal namespace
flox auth login # Authenticate before publishing
```
## Publishing Workflow: Development to Runtime
Publishing packages enables a clear separation between **development** and **runtime/consumption**:
### The Complete Workflow
**Phase 1: Development Environment**
```toml
# .flox/env/manifest.toml (in git with source code)
[install]
gcc.pkg-path = "gcc13"
make.pkg-path = "make"
python.pkg-path = "python311Full"
[build.myapp]
command = '''
python setup.py build
mkdir -p $out/bin
cp build/myapp $out/bin/
'''
version = "1.0.0"
```
Developers work in this environment, commit `.flox/` to git alongside source code.
**Phase 2: Build and Publish**
```bash
# Build the package
flox build myapp
# Publish to catalog
flox publish -o myorg myapp
```
The published package contains BINARIES/ARTIFACTS (what's in `$out/`), NOT source code.
**Phase 3: Runtime Environment**
```toml
# Separate environment (can be pushed to FloxHub)
[install]
myapp.pkg-path = "myorg/myapp" # The published package
```
Consumers create runtime environments and install the published package. No build tools needed, no source code exposed.
**Key insight**: You don't install the published package back into the development environment - that would be circular. Published packages are installed into OTHER environments (different projects, production, etc.).
## Publishing to Flox Catalog
### Prerequisites
Before publishing:
- Package defined in `[build]` section or `.flox/pkgs/`
- Environment in Git repo with configured remote
- Clean working tree (no uncommitted changes)
- Current commit pushed to remote
- All build files tracked by Git
- At least one package installed in `[install]`
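A quick pre-flight check before publishing (the `myapp` build target is illustrative):
```bash
git status --porcelain      # should print nothing: clean working tree
git push origin HEAD        # current commit must exist on the remote
git ls-files .flox/         # build files must be tracked by Git
flox list                   # at least one package in [install]
flox build myapp            # the target must build locally first
```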
### Authentication
Run authentication before first publish:
```bash
flox auth login
```
### Publishing Commands
```bash
# Publish single package
flox publish my_package
# Publish all packages
flox publish
# Publish to organization
flox publish -o myorg my_package
# Publish to personal namespace (for testing)
flox publish -o mypersonalhandle my_package
```
### Catalog Types
**Personal catalogs**: Only visible to you (good for testing)
- Published to your personal namespace
- Example: User "alice" publishes "hello" → available as `alice/hello`
- Useful for testing before publishing to organization
**Organization catalogs**: Shared with team members (paid feature)
- Published to organization namespace
- Example: Org "acme" publishes "tool" → available as `acme/tool`
- All organization members can install
### Build Validation
Flox clones your repo to a temp location and performs a clean build to ensure reproducibility. Only packages that build successfully in this clean environment can be published.
This validation ensures:
- All dependencies are declared
- Build is reproducible
- No reliance on local machine state
- Git repository is clean and up-to-date
### After Publishing
- Package available in `flox search`, `flox show`, `flox install`
- Metadata sent to Flox servers
- Package binaries uploaded to Catalog Store
- Install with: `flox install <catalog>/<package>`
Users can then:
```bash
# Search for your package
flox search my_package
# See package details
flox show myorg/my_package
# Install the package
flox install myorg/my_package
```
### What Gets Published
**Published packages contain:**
- Binaries and compiled artifacts (everything in `$out/`)
- Runtime dependencies specified in `runtime-packages`
- Package metadata (version, description)
**Published packages do NOT contain:**
- Source code (unless explicitly copied to `$out/`)
- Build tools or build-time dependencies
- Development environment configuration
- The `.flox/` directory itself
This separation allows you to share built artifacts without exposing source code.
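Before publishing, it can be worth confirming that only intended artifacts ended up in the output (paths below are illustrative):
```bash
# list everything that will ship with the package
find ./result-myapp/ -type f
# spot-check the layout: binaries and shared data, no stray source tree
ls ./result-myapp/bin ./result-myapp/share 2>/dev/null
```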
## Real-world Publishing Workflows
### Application Development Workflow
**Developer workflow:**
1. Create development environment with build tools:
```bash
mkdir myapp && cd myapp
flox init
flox install gcc make python311Full
```
2. Add source code and build definition to `.flox/env/manifest.toml`:
```toml
[build.myapp]
command = '''make && cp myapp $out/bin/'''
version = "1.0.0"
```
3. Commit to git (environment definition + source code):
```bash
git add .flox/ src/
git commit -m "Add development environment and source"
git push origin main
```
4. Build and publish package (binaries/artifacts):
```bash
flox build myapp
flox publish -o myorg myapp
```
**Other developers:**
- Clone repo: `git clone <repo> && cd myapp && flox activate`
- Get the same development environment with build tools
**Consumers:**
- Create new runtime environment: `flox init && flox install myorg/myapp`
- OR install into existing environment: `flox install myorg/myapp`
- Get the BUILT package (binaries), not source code
- Can push runtime environment to FloxHub without exposing source
### Fork-based Development Pattern
1. Fork upstream repo (e.g., `user/project` from `upstream/project`)
2. Add `.flox/` to fork with build definitions
3. Commit and push: `git push origin main`
4. Publish package: `flox publish -o username package-name`
5. Others can install: `flox install username/package-name`
## Versioning Strategies
### Semantic Versioning
```toml
[build.mytool]
version = "1.2.3" # Major.Minor.Patch
description = "My awesome tool"
```
### Git-based Versioning
```toml
[build.mytool]
version.command = "git describe --tags"
description = "My awesome tool"
```
### File-based Versioning
```toml
[build.mytool]
version.file = "VERSION.txt"
description = "My awesome tool"
```
### Dynamic Versioning from Source
```toml
[build.rustapp]
version.command = "cargo metadata --no-deps --format-version 1 | jq -r '.packages[0].version'"
```
## Publishing Multiple Variants
You can publish multiple variants of the same project:
```toml
[build.myapp]
command = '''
cargo build --release
mkdir -p $out/bin
cp target/release/myapp $out/bin/
'''
version = "1.0.0"
description = "Production build"
sandbox = "pure"
[build.myapp-debug]
command = '''
cargo build
mkdir -p $out/bin
cp target/debug/myapp $out/bin/myapp-debug
'''
version = "1.0.0"
description = "Debug build with symbols"
sandbox = "off"
```
Both can be published and users can choose which to install.
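For example (assuming the catalog `myorg`):
```bash
flox publish -o myorg myapp
flox publish -o myorg myapp-debug

# Consumers pick the variant they need
flox install myorg/myapp          # production build
flox install myorg/myapp-debug    # debug build with symbols
```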
## Testing Before Publishing
### Local Testing
1. Build the package:
```bash
flox build myapp
```
2. Test the built artifact:
```bash
./result-myapp/bin/myapp --version
```
3. Install locally to test:
```bash
flox install ./result-myapp
```
### Personal Catalog Testing
Publish to your personal namespace first:
```bash
flox publish -o myusername myapp
```
Then test installation:
```bash
flox install myusername/myapp
```
Once validated, republish to organization:
```bash
flox publish -o myorg myapp
```
## Common Gotchas
### Branch names
Many repos use `master`, not `main`; check with `git branch`.
### Auth required
Run `flox auth login` before first publish
### Clean git state
Commit and push ALL changes before `flox publish`:
```bash
git status # Check for uncommitted changes
git add .flox/
git commit -m "Add flox build configuration"
git push origin master
```
### runtime-packages
List only what the package needs at runtime, not build dependencies:
```toml
[install]
gcc.pkg-path = "gcc"
make.pkg-path = "make"
[build.myapp]
command = '''mkdir -p $out/bin && make && cp myapp $out/bin/'''
runtime-packages = [] # No runtime deps needed
```
### Git-tracked files only
All files referenced in a build must be tracked by Git:
```bash
git add .flox/pkgs/*
git add src/
git commit -m "Add build files"
```
## Publishing Nix Expression Builds
For Nix expression builds in `.flox/pkgs/`:
1. Create the Nix expression:
```bash
mkdir -p .flox/pkgs
cat > .flox/pkgs/hello.nix << 'EOF'
{ hello }:
hello.overrideAttrs (oldAttrs: {
patches = (oldAttrs.patches or []) ++ [ ./my.patch ];
})
EOF
```
2. Track with Git:
```bash
git add .flox/pkgs/*
git commit -m "Add hello package"
git push
```
3. Publish:
```bash
flox publish hello
```
## Publishing Configuration and Assets
You can publish non-code artifacts:
### Configuration templates
```toml
[build.nginx-config]
command = '''
mkdir -p $out/etc
cp nginx.conf $out/etc/
cp -r conf.d $out/etc/
'''
version = "1.0.0"
description = "Organization Nginx configuration"
```
### Protocol buffers
```toml
[build.api-proto]
command = '''
mkdir -p $out/share/proto
# Enable recursive ** globbing; it is off by default in bash
shopt -s globstar
cp proto/**/*.proto $out/share/proto/
'''
version = "2.1.0"
description = "API protocol definitions"
```
Teams install and reference via `$FLOX_ENV/etc/` or `$FLOX_ENV/share/`.
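For example, a consumer of the configuration package above might do the following (a sketch; the path mirrors what the build copied into `$out/etc/`):
```bash
flox install myorg/nginx-config

# Inside the activated environment, reference the packaged asset
nginx -c "$FLOX_ENV/etc/nginx.conf"
```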
## Continuous Integration Publishing
### GitHub Actions Example
```yaml
name: Publish to Flox
on:
push:
tags:
- 'v*'
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Flox
run: |
curl -fsSL https://downloads.flox.dev/by-env/stable/install | bash
- name: Authenticate
env:
FLOX_AUTH_TOKEN: ${{ secrets.FLOX_AUTH_TOKEN }}
run: flox auth login --token "$FLOX_AUTH_TOKEN"
- name: Publish package
run: flox publish -o myorg mypackage
```
### GitLab CI Example
```yaml
publish:
stage: deploy
only:
- tags
script:
- curl -fsSL https://downloads.flox.dev/by-env/stable/install | bash
- flox auth login --token "$FLOX_AUTH_TOKEN"
- flox publish -o myorg mypackage
```
## Package Metadata Best Practices
### Good Descriptions
```toml
[build.cli]
description = "High-performance log shipper with filtering" # Good: specific, descriptive
# Avoid:
# description = "My tool" # Too vague
# description = "CLI" # Not descriptive enough
```
### Proper Versioning
- Use semantic versioning: MAJOR.MINOR.PATCH
- Increment MAJOR for breaking changes
- Increment MINOR for new features
- Increment PATCH for bug fixes
### Runtime Dependencies
Only include what's actually needed at runtime:
```toml
[install]
# Build-time only
gcc.pkg-path = "gcc"
make.pkg-path = "make"
# Runtime dependency
libssl.pkg-path = "openssl"
[build.myapp]
runtime-packages = ["libssl"] # Only runtime deps
```
## Related Skills
- **flox-builds** - Building packages before publishing, dual-environment workflow
- **flox-environments** - Setting up development and runtime environments
- **flox-sharing** - Sharing environment definitions (via git or FloxHub) vs publishing packages (binaries/artifacts)
View File
@@ -0,0 +1,298 @@
---
name: flox-services
description: Running services and background processes in Flox environments. Use for service configuration, network services, logging, database setup, and service debugging.
---
# Flox Services Guide
## Running Services in Flox Environments
- Start with `flox activate --start-services` or `flox activate -s`
- Define `is-daemon`, `shutdown.command` for background processes
- Keep services running by ending the command with `tail -f /dev/null` (see the sketch below)
- Use `flox services status/logs/restart` to manage (must be in activated env)
- Service commands don't inherit hook activations; explicitly source/activate what you need
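A minimal sketch of the `tail -f /dev/null` pattern, assuming a hypothetical one-shot setup script:
```toml
[services.warmup]
command = '''
# Hypothetical one-shot task; replace with your own command
./scripts/warm-cache.sh
# Keep the process alive so the service stays "running"
tail -f /dev/null
'''
```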
## Core Commands
```bash
flox activate -s # Start services
flox services status # Check service status
flox services logs <service> # View service logs
flox services restart <service> # Restart a service
flox services stop <service> # Stop a service
```
## Network Services Pattern
Always make host/port configurable via vars:
```toml
[services.webapp]
command = '''exec app --host "$APP_HOST" --port "$APP_PORT"'''
[vars]
APP_HOST = "0.0.0.0" # Network-accessible
APP_PORT = "8080"
```
## Service Logging Pattern
Always pipe to `$FLOX_ENV_CACHE/logs/` for debugging:
```toml
[services.myapp]
command = '''
mkdir -p "$FLOX_ENV_CACHE/logs"
exec app 2>&1 | tee -a "$FLOX_ENV_CACHE/logs/app.log"
'''
```
## Python venv Pattern for Services
Services must activate venv independently:
```toml
[services.myapp]
command = '''
[ -f "$FLOX_ENV_CACHE/venv/bin/activate" ] && \
source "$FLOX_ENV_CACHE/venv/bin/activate"
exec python-app "$@"
'''
```
Or use venv Python directly:
```toml
[services.myapp]
command = '''exec "$FLOX_ENV_CACHE/venv/bin/python" app.py'''
```
## Using Packaged Services
Override package's service by redefining with same name.
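A minimal sketch, assuming an included environment (for example `team/postgres-service`) already defines `[services.postgres]`; redefining the same service name in your manifest replaces that definition:
```toml
[services.postgres]
command = '''
mkdir -p "$FLOX_ENV_CACHE/postgres"
exec postgres -D "$FLOX_ENV_CACHE/postgres/data" \
  -h "$POSTGRES_HOST" -p "$POSTGRES_PORT" \
  -c log_min_messages=debug1
'''
is-daemon = true
```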
## Database Service Examples
### PostgreSQL
```toml
[services.postgres]
command = '''
mkdir -p "$FLOX_ENV_CACHE/postgres"
if [ ! -d "$FLOX_ENV_CACHE/postgres/data" ]; then
initdb -D "$FLOX_ENV_CACHE/postgres/data"
fi
exec postgres -D "$FLOX_ENV_CACHE/postgres/data" \
-k "$FLOX_ENV_CACHE/postgres" \
-h "$POSTGRES_HOST" \
-p "$POSTGRES_PORT"
'''
is-daemon = true
[vars]
POSTGRES_HOST = "localhost"
POSTGRES_PORT = "5432"
POSTGRES_USER = "myuser"
POSTGRES_DB = "mydb"
```
### Redis
```toml
[services.redis]
command = '''
mkdir -p "$FLOX_ENV_CACHE/redis"
exec redis-server \
--bind "$REDIS_HOST" \
--port "$REDIS_PORT" \
--dir "$FLOX_ENV_CACHE/redis"
'''
is-daemon = true
[vars]
REDIS_HOST = "127.0.0.1"
REDIS_PORT = "6379"
```
### MongoDB
```toml
[services.mongodb]
command = '''
mkdir -p "$FLOX_ENV_CACHE/mongodb"
exec mongod \
--dbpath "$FLOX_ENV_CACHE/mongodb" \
--bind_ip "$MONGODB_HOST" \
--port "$MONGODB_PORT"
'''
is-daemon = true
[vars]
MONGODB_HOST = "127.0.0.1"
MONGODB_PORT = "27017"
```
## Web Server Examples
### Node.js Development Server
```toml
[services.dev-server]
command = '''
exec npm run dev -- --host "$DEV_HOST" --port "$DEV_PORT"
'''
[vars]
DEV_HOST = "0.0.0.0"
DEV_PORT = "3000"
```
### Python Flask/FastAPI
```toml
[services.api]
command = '''
source "$FLOX_ENV_CACHE/venv/bin/activate"
exec python -m uvicorn main:app \
--host "$API_HOST" \
--port "$API_PORT" \
--reload
'''
[vars]
API_HOST = "0.0.0.0"
API_PORT = "8000"
```
### Simple HTTP Server
```toml
[services.web]
command = '''exec python -m http.server "$WEB_PORT"'''
[vars]
WEB_PORT = "8000"
```
## Environment Variable Convention
Use variables like `POSTGRES_HOST`, `POSTGRES_PORT` to define where services run.
These store connection details *separately*:
- `*_HOST` is the hostname or IP address (e.g., `localhost`, `db.example.com`)
- `*_PORT` is the network port number (e.g., `5432`, `6379`)
This pattern ensures users can override them at runtime:
```bash
POSTGRES_HOST=db.internal POSTGRES_PORT=6543 flox activate -s
```
Use consistent naming across services so the meaning is clear to any system or person reading the variables.
## Service with Shutdown Command
```toml
[services.myapp]
command = '''exec myapp start'''
is-daemon = true
[services.myapp.shutdown]
command = '''myapp stop'''
```
## Dependent Services
Services can wait for other services to be ready:
```toml
[services.db]
command = '''exec postgres -D "$FLOX_ENV_CACHE/postgres"'''
is-daemon = true
[services.api]
command = '''
# Wait for database
until pg_isready -h localhost -p 5432; do
sleep 1
done
exec python -m uvicorn main:app
'''
[vars]
POSTGRES_HOST = "localhost"
POSTGRES_PORT = "5432"
```
## Service Health Checks
```toml
[services.api]
command = '''
# Start the app in the background, then gate readiness on its health endpoint
python -m uvicorn main:app --host 0.0.0.0 --port 8000 &
APP_PID=$!
until curl -sf http://localhost:8000/health > /dev/null; do
  sleep 1
done
wait "$APP_PID"
'''
```
## Best Practices
- Log service output to `$FLOX_ENV_CACHE/logs/`
- Test activation with `flox activate -- <command>` before adding to services
- When debugging services, run the exact command from manifest manually first
- Always make host/port configurable via vars for network services
- Use `exec` to replace the shell process with the service command
- Services must activate venv inside service command, not rely on hook activation
- Use `is-daemon = true` for background processes that should detach
## Debugging Service Issues
### Check Service Status
```bash
flox services status
```
### View Service Logs
```bash
flox services logs myservice
```
### Run Service Command Manually
```bash
flox activate
# Copy the exact command from manifest and run it
```
### Check if Service is Listening
```bash
# Check if port is open
lsof -i :8000
netstat -an | grep 8000
# Test connection
curl http://localhost:8000
nc -zv localhost 8000
```
## Common Pitfalls
### Services Don't Preserve State
Services see fresh environment (no preserved state between restarts). Store persistent data in `$FLOX_ENV_CACHE`.
### Service Commands Don't Inherit Hook Activations
Explicitly source/activate what you need inside the service command.
### Forgetting to Create Directories
Always `mkdir -p` for data directories in service commands.
### Port Conflicts
Use configurable ports via variables to avoid conflicts with other services.
## Related Skills
- **flox-environments** - Environment basics and package installation
- **flox-sharing** - Composing environments with shared services
- **flox-containers** - Running services in containers
View File
@@ -0,0 +1,407 @@
---
name: flox-sharing
description: Sharing and composing Flox environments. Use for environment composition, remote environments, FloxHub, and team collaboration patterns.
---
# Flox Environment Sharing & Composition Guide
## Core Concepts
**Composition**: Build-time merging of environments (deterministic)
**Remote Environments**: Shared environments via FloxHub
**Team Collaboration**: Reusable, shareable environment stacks
## Understanding Environment Sharing
**The `.flox/` directory contains the environment definition**:
- Package specifications and versions
- Environment variables
- Build definitions
- Hooks and services configuration
**The environment definition does NOT include**:
- Built binaries/artifacts (those are created by builds and can be published as packages)
- Local data or cache
**Two sharing mechanisms**:
1. **Git**: Commit `.flox/` directory to git. When used with development environments, this is typically alongside your source code in the same repository. Other developers clone the repo and get both the environment definition and source code.
2. **FloxHub**: Push environment definition only using `flox push`. This shares ONLY the `.flox/` directory, not any source code or other files. Useful for runtime environments or shared base environments used across multiple projects.
**This is different from publishing packages** (see **flox-publish** skill), where you build and distribute the actual binaries/artifacts.
## Core Commands
```bash
# Activate remote environment
flox activate -r owner/environment-name
# Pull remote environment locally
flox pull owner/environment-name
# Push local environment to FloxHub
flox push
# Compose environments in manifest
# (see [include] section below)
```
## Environment Composition
### Basic Composition
Merge environments at build time using `[include]`:
```toml
[include]
environments = [
{ remote = "team/postgres" },
{ remote = "team/redis" },
{ remote = "team/python-base" }
]
```
### Creating Composition-Optimized Environments
**Design for clean merging at build time:**
```toml
[install]
# Use pkg-groups to prevent conflicts
gcc.pkg-path = "gcc"
gcc.pkg-group = "compiler"
[vars]
# Never duplicate var names across composed envs
POSTGRES_PORT = "5432" # Not "PORT"
[hook]
on-activate = '''
  # Check if setup already done (idempotent)
  setup_postgres() {
    [ -d "$FLOX_ENV_CACHE/postgres" ] || init_db
  }
'''
```
**Best practices:**
- No overlapping vars, services, or function names
- Use explicit, namespaced naming (e.g., `postgres_init` not `init`)
- Minimal hook logic (composed envs run ALL hooks)
- Avoid auto-run logic in `[profile]` (runs once per layer/composition; help displays will repeat)
- Test composability: `flox activate` each env standalone first
### Composition Example: Full Stack
```toml
# .flox/env/manifest.toml
[include]
environments = [
{ remote = "team/postgres" },
{ remote = "team/redis" },
{ remote = "team/nodejs" },
{ remote = "team/monitoring" }
]
[vars]
# Override composed environment variables
POSTGRES_HOST = "localhost"
POSTGRES_PORT = "5433" # Non-standard port
```
### Use Cases for Composition
**Reproducible stacks:**
```toml
[include]
environments = [
{ remote = "team/cuda-base" },
{ remote = "team/cuda-math" },
{ remote = "team/python-ml" }
]
```
**Shared base configuration:**
```toml
[include]
environments = [
{ remote = "org/standards" }, # Company-wide settings
{ remote = "team/backend" } # Team-specific tools
]
```
## Creating Dual-Purpose Environments
**Design for both layering and composition:**
```toml
[install]
# Clear package groups
python.pkg-path = "python311"
python.pkg-group = "runtime"
[vars]
# Namespace everything
MYPROJECT_VERSION = "1.0"
MYPROJECT_CONFIG = "$FLOX_ENV_CACHE/config"
[profile]
common = '''
  # Defensive function definitions
  if ! type myproject_init >/dev/null 2>&1; then
    myproject_init() { ... }
  fi
'''
```
## Remote Environments
### Activating Remote Environments
```bash
# Activate remote environment directly
flox activate -r owner/environment-name
# Activate and run a command
flox activate -r owner/environment-name -- npm test
```
### Pulling Remote Environments
```bash
# Pull to work on locally
flox pull owner/environment-name
# Now it's in your local .flox/
flox activate
```
### Pushing Environments to FloxHub
```bash
# Initialize Git repo if needed
git init
git add .flox/
git commit -m "Initial environment"
# Push to FloxHub
flox push
# Others can now activate with:
# flox activate -r yourusername/your-repo
```
### Choosing Between Git and FloxHub
**Commit `.flox/` to Git when:**
- Environment is for development (includes build tools)
- Environment lives alongside source code
- You want version control history for environment changes
- Team already uses git for collaboration
**Push to FloxHub when:**
- Environment is for runtime/production (no source code needed)
- Creating shared base environments used across multiple projects
- Environment needs to be independently versioned from source code
- You want to share environment without exposing source code
**Recommended pattern**: Commit development environments to git with source code; push runtime environments to FloxHub.
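A minimal sketch of that split, assuming a published package `myorg/myapp`:
```bash
# Development environment: lives in git next to the source
cd myapp
git add .flox/ src/
git commit -m "Add development environment"
git push origin main

# Runtime environment: shared on FloxHub, no source required
mkdir myapp-runtime && cd myapp-runtime
flox init
flox install myorg/myapp
flox push
```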
## Team Collaboration Patterns
### Base + Specialization
**Create base environment:**
```toml
# team/base
[install]
git.pkg-path = "git"
gh.pkg-path = "gh"
jq.pkg-path = "jq"
[vars]
ORG_REGISTRY = "registry.company.com"
```
**Specialize for teams:**
```toml
# team/frontend
[include]
environments = [{ remote = "team/base" }]
[install]
nodejs.pkg-path = "nodejs"
pnpm.pkg-path = "pnpm"
```
```toml
# team/backend
[include]
environments = [{ remote = "team/base" }]
[install]
python.pkg-path = "python311Full"
uv.pkg-path = "uv"
```
### Service Libraries
**Create reusable service environments:**
```toml
# team/postgres-service
[install]
postgresql.pkg-path = "postgresql"
[services.postgres]
command = '''
mkdir -p "$FLOX_ENV_CACHE/postgres"
if [ ! -d "$FLOX_ENV_CACHE/postgres/data" ]; then
initdb -D "$FLOX_ENV_CACHE/postgres/data"
fi
exec postgres -D "$FLOX_ENV_CACHE/postgres/data" \
-h "$POSTGRES_HOST" -p "$POSTGRES_PORT"
'''
is-daemon = true
[vars]
POSTGRES_HOST = "localhost"
POSTGRES_PORT = "5432"
```
**Compose into projects:**
```toml
# my-project
[include]
environments = [
{ remote = "team/postgres-service" },
{ remote = "team/redis-service" }
]
```
### Development vs Runtime Environments
**Development environment (for building):**
```toml
# project-dev (committed to git with source code)
[install]
gcc.pkg-path = "gcc13"
make.pkg-path = "make"
debugpy.pkg-path = "python311Packages.debugpy"
pytest.pkg-path = "python311Packages.pytest"
[build.myapp]
command = '''
make release
mkdir -p $out/bin
cp build/myapp $out/bin/
'''
version = "1.0.0"
[vars]
DEBUG = "true"
LOG_LEVEL = "debug"
```
Developers commit this `.flox/` directory to git with the source code. Other developers `git clone` and `flox activate` to get the same development environment.
**Runtime environment (for consuming):**
```toml
# project-runtime (pushed to FloxHub, no source code)
[install]
myapp.pkg-path = "myorg/myapp" # Published package, not source
[vars]
DEBUG = "false"
LOG_LEVEL = "info"
MYAPP_CONFIG = "$FLOX_ENV_CACHE/config"
```
After publishing `myapp`, consumers create this runtime environment and install the published package. The runtime environment can be pushed to FloxHub and shared without exposing source code.
**Key distinction**: Development environments contain build tools and source code; runtime environments contain published packages (binaries/artifacts).
(See **flox-environments** skill for layering environments at runtime)
## Composition with Local Packages
Combine composed environments with local packages:
```toml
# Compose base services
[include]
environments = [
{ remote = "team/database-services" },
{ remote = "team/cache-services" }
]
# Add project-specific packages
[install]
myapp.pkg-path = "company/myapp"
```
See **flox-environments** skill for layering environments at runtime.
## Best Practices
### For Shareable Environments
1. **Use descriptive names**: `team/postgres-service` not `db`
2. **Document expectations**: What vars/ports/services are provided
3. **Namespace everything**: Prefix vars, functions, services
4. **Keep focused**: One responsibility per environment
5. **Test standalone**: `flox activate` should work without composition
### For Composed Environments
1. **No name collisions**: Check for overlapping vars/services
2. **Idempotent hooks**: Can run multiple times safely
3. **Minimal auto-run**: Avoid output in `[profile]`
4. **Clear dependencies**: Document what environments are needed
(For layering best practices, see **flox-environments** skill)
## Version Management
### Pin Specific Versions
```toml
[include]
environments = [
{ remote = "team/base", version = "v1.2.3" }
]
```
### Use Latest
```toml
[include]
environments = [
{ remote = "team/base" } # Uses latest
]
```
## Troubleshooting
### Conflicts in Composition
If composed environments conflict (a sketch of the first two fixes follows this list):
1. Use different `pkg-group` values
2. Adjust `priority` for file conflicts
3. Namespace variables to avoid collisions
4. Test each environment standalone first
(For layering troubleshooting, see **flox-environments** skill)
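A minimal sketch of the first two fixes (package names are for illustration only):
```toml
[install]
# 1. Put packages that need not resolve together into different pkg-groups
gcc.pkg-path = "gcc"
gcc.pkg-group = "compiler"
clang.pkg-path = "clang"
clang.pkg-group = "alt-compiler"

# 2. Adjust priority when two packages install the same file path
coreutils.pkg-path = "coreutils"
coreutils.priority = 3
```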
### Remote Environment Not Found
```bash
# Check available remote environments
flox search --remote owner/
# Pull and inspect locally
flox pull owner/environment-name
flox list -c
```
## Related Skills
- **flox-environments** - Creating base environments
- **flox-services** - Sharing service configurations
- **flox-containers** - Deploying shared environments
- **flox-publish** - Publishing built packages (binaries/artifacts) vs sharing environments (definitions only)