Now I can help you properly investigate the pipeline failure. Rather than just increasing the timeout (which usually masks the real problem), let's find out what's actually going wrong.

**To get started, I need some information:**

1. **What namespace is your pipeline running in?**
2. **Do you know the PipelineRun name?** (or can you describe which pipeline is failing?)
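If you're not sure of the PipelineRun name, listing recent runs is one way to find it. This is a minimal sketch assuming `kubectl` (and optionally the Tekton CLI, `tkn`) is installed and pointed at your cluster; the namespace is a placeholder:

```bash
# List PipelineRuns in the namespace (Tekton CRDs must be installed)
kubectl get pipelinerun -n <your-namespace>

# Or, with the Tekton CLI
tkn pipelinerun list -n <your-namespace>
```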
Once you provide this, I'll systematically investigate by:

- Checking the PipelineRun status to see which TaskRun failed
- Examining logs for the actual error
- Reviewing Kubernetes events for resource issues
- Identifying the root cause (slow operation, resource constraint, actual failure, etc.)
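To make that concrete, here's a rough sketch of those checks using `kubectl` and the Tekton CLI (`tkn`); the namespace and run name are placeholders you'd swap for your own:

```bash
# Placeholders -- replace with your actual values
NS=<your-namespace>
PR=<your-pipelinerun>

# 1. Overall status: which TaskRun failed, and with what reason
kubectl get pipelinerun "$PR" -n "$NS"
tkn pipelinerun describe "$PR" -n "$NS"

# 2. Logs for the failed task(s), to see the actual error
tkn pipelinerun logs "$PR" -n "$NS"

# 3. Recent events: scheduling, quota, or image-pull problems show up here
kubectl get events -n "$NS" --sort-by=.lastTimestamp | tail -n 30
```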
**Common scenarios where timeout increases are NOT the solution:**

- Build scripts with errors (needs code fix)
- Resource exhaustion (needs quota/limit adjustments; see the pod check below)
- Image pull failures (needs registry/auth fixes)
- Missing dependencies (needs build config changes)
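Resource exhaustion and image pull failures in particular tend to show up in pod status and events rather than in the task logs. A quick check, assuming Tekton's default pod labels (the `tekton.dev/pipelineRun` label) and the same placeholders as above:

```bash
# Pods created for the run: look for OOMKilled, Evicted, ErrImagePull, ImagePullBackOff
kubectl get pods -n "$NS" -l tekton.dev/pipelineRun="$PR"

# Dig into one failing pod for the exact reason and related events
kubectl describe pod <failing-pod-name> -n "$NS"
```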
**When timeout increases ARE appropriate:**

- Legitimately slow operations (large downloads, complex builds)
- After verifying the operation is progressing but just needs more time
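If it does look like a genuinely slow operation, one way to verify the run is still making progress before raising anything (same placeholders; the exact timeout field depends on your Tekton version) is:

```bash
# Follow live logs; steady new output usually means the task is working, not hung
tkn pipelinerun logs "$PR" -n "$NS" -f

# If it is progressing, raise the timeout on the next run's definition
# (spec.timeouts.pipeline in recent Tekton releases; older ones use spec.timeout)
```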
So let's find out which category your failure falls into. What's the namespace and PipelineRun name?