Based on the debugging-pipeline-failures skill, here's how to find out why your TaskRun is stuck in the `Pending` state:

## Quick Diagnosis Steps

**1. Check the TaskRun status and conditions:**

```bash
kubectl get taskrun <taskrun-name> -n <namespace>
kubectl describe taskrun <taskrun-name> -n <namespace>
```

**2. Check for pod existence and status:**

```bash
kubectl get pods -l tekton.dev/taskRun=<taskrun-name> -n <namespace>
```

**3. Check Kubernetes events (most revealing for Pending issues):**

```bash
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
# Or filter for the specific TaskRun pod
kubectl get events --field-selector involvedObject.name=<pod-name> -n <namespace>
```

## Common Causes for Pending State

The events will typically reveal one of these issues:

| Event Type | Root Cause | Solution |
|------------|------------|----------|
| `FailedScheduling` | Resource constraints: not enough CPU/memory | Check namespace quotas and node capacity |
| `Insufficient cpu/memory` | No node can satisfy the resource requests | Reduce resource requests or add nodes |
| No pod exists yet | The TaskRun hasn't created a pod | Check the PipelineRun/Task configuration |
| PVC issues | Workspace PVC not bound | Check PVC status with `kubectl get pvc` |
| Node selector mismatch | No nodes match selectors/taints | Review nodeSelector and tolerations |

(Example manifests showing where these settings live are sketched at the end of this answer.)

## Detailed Investigation

**Check resource quotas:**

```bash
kubectl describe namespace <namespace> | grep -A5 "Resource Quotas"
```

**Check node capacity:**

```bash
kubectl describe node <node-name> | grep -A5 "Allocated resources"
```

**Check PVC status (if using workspaces):**

```bash
kubectl get pvc -n <namespace>
kubectl describe pvc <pvc-name> -n <namespace>
```

**Inspect the TaskRun YAML for resource requests:**

```bash
kubectl get taskrun <taskrun-name> -n <namespace> -o yaml | grep -A10 "resources:"
```

Would you like me to help you run these commands? Please provide:

- The TaskRun name
- The namespace
- Or share the output of `kubectl describe taskrun <taskrun-name>`
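## Reference: Where These Settings Live

If the events point at resource requests or node selection, those settings come from the Task's steps and the TaskRun's pod template. A minimal sketch, assuming the `tekton.dev/v1` API; the names (`build-task`, `build-run`) and values are placeholders, not your actual objects:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: build-task            # hypothetical Task name
spec:
  steps:
    - name: build
      image: golang:1.22
      # Per-step requests must fit on a node and within any namespace
      # ResourceQuota, or the pod stays Pending.
      computeResources:       # "resources" in the older v1beta1 API
        requests:
          cpu: 500m
          memory: 512Mi
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run             # hypothetical TaskRun name
spec:
  taskRef:
    name: build-task
  # Node selection goes through the pod template; if no node matches,
  # the scheduler reports FailedScheduling and the pod stays Pending.
  podTemplate:
    nodeSelector:
      kubernetes.io/arch: amd64
    tolerations:
      - key: dedicated
        operator: Equal
        value: ci
        effect: NoSchedule
```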
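If the pod is waiting on an unbound workspace PVC, check how the workspace is bound on the TaskRun. A minimal sketch using a `volumeClaimTemplate`, again with placeholder names; whether the claim actually binds depends on your cluster's StorageClass:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run-with-workspace   # hypothetical TaskRun name
spec:
  taskRef:
    name: build-task
  workspaces:
    - name: source                 # must match the workspace declared in the Task
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
          # storageClassName: standard   # set explicitly if the default class can't provision
```

If `kubectl get pvc` shows the claim itself stuck in `Pending`, the pod will never schedule; `kubectl describe pvc` usually names the StorageClass or provisioning problem.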