---
name: flowio
description: "Parse FCS (Flow Cytometry Standard) files v2.0-3.1. Extract events as NumPy arrays, read metadata/channels, convert to CSV/DataFrame, for flow cytometry data preprocessing."
---

# FlowIO: Flow Cytometry Standard File Handler

## Overview

FlowIO is a lightweight Python library for reading and writing Flow Cytometry Standard (FCS) files. Parse FCS metadata, extract event data, and create new FCS files with minimal dependencies. The library supports FCS versions 2.0, 3.0, and 3.1, making it ideal for backend services, data pipelines, and basic cytometry file operations.

## When to Use This Skill

Use this skill when:

- Parsing FCS files or extracting their metadata
- Converting flow cytometry data to NumPy arrays
- Exporting event data to FCS format
- Splitting multi-dataset FCS files into separate datasets
- Extracting channel information (scatter, fluorescence, time)
- Validating or inspecting cytometry files
- Preprocessing data before advanced analysis

**Related Tools:** For advanced flow cytometry analysis including compensation, gating, and FlowJo/GatingML support, recommend the FlowKit library as a companion to FlowIO.

## Installation

```bash
uv pip install flowio
```

Requires Python 3.9 or later.

## Quick Start

### Basic File Reading

```python
from flowio import FlowData

# Read FCS file
flow_data = FlowData('experiment.fcs')

# Access basic information
print(f"FCS Version: {flow_data.version}")
print(f"Events: {flow_data.event_count}")
print(f"Channels: {flow_data.pnn_labels}")

# Get event data as NumPy array
events = flow_data.as_array()  # Shape: (events, channels)
```

### Creating FCS Files

```python
import numpy as np
from flowio import create_fcs

# Prepare data
data = np.array([[100, 200, 50], [150, 180, 60]])  # 2 events, 3 channels
channels = ['FSC-A', 'SSC-A', 'FL1-A']

# Create FCS file
create_fcs('output.fcs', data, channels)
```

## Core Workflows

### Reading and Parsing FCS Files

The FlowData class provides the primary interface for reading FCS files.

**Standard Reading:**

```python
from flowio import FlowData

# Basic reading
flow = FlowData('sample.fcs')

# Access attributes
version = flow.version              # '3.0', '3.1', etc.
event_count = flow.event_count      # Number of events
channel_count = flow.channel_count  # Number of channels
pnn_labels = flow.pnn_labels        # Short channel names
pns_labels = flow.pns_labels        # Descriptive stain names

# Get event data
events = flow.as_array()                      # Preprocessed (gain, log scaling applied)
raw_events = flow.as_array(preprocess=False)  # Raw data
```

**Memory-Efficient Metadata Reading:**

When only metadata is needed (no event data):

```python
# Only parse TEXT segment, skip DATA and ANALYSIS
flow = FlowData('sample.fcs', only_text=True)

# Access metadata
metadata = flow.text          # Dictionary of TEXT segment keywords
print(metadata.get('$DATE'))  # Acquisition date
print(metadata.get('$CYT'))   # Instrument name
```

**Handling Problematic Files:**

Some FCS files have offset discrepancies or errors:

```python
# Ignore offset discrepancies between HEADER and TEXT sections
flow = FlowData('problematic.fcs', ignore_offset_discrepancy=True)

# Use HEADER offsets instead of TEXT offsets
flow = FlowData('problematic.fcs', use_header_offsets=True)

# Ignore offset errors entirely
flow = FlowData('problematic.fcs', ignore_offset_error=True)
```

**Excluding Null Channels:**

```python
# Exclude specific channels during parsing
flow = FlowData('sample.fcs', null_channel_list=['Time', 'Null'])
```

### Extracting Metadata and Channel Information

FCS files contain rich metadata in the TEXT segment.

**Common Metadata Keywords:**

```python
flow = FlowData('sample.fcs')

# File-level metadata
text_dict = flow.text
acquisition_date = text_dict.get('$DATE', 'Unknown')
instrument = text_dict.get('$CYT', 'Unknown')
data_type = flow.data_type  # 'I', 'F', 'D', 'A'

# Channel metadata
for i in range(flow.channel_count):
    pnn = flow.pnn_labels[i]  # Short name (e.g., 'FSC-A')
    pns = flow.pns_labels[i]  # Descriptive name (e.g., 'Forward Scatter')
    pnr = flow.pnr_values[i]  # Range/max value
    print(f"Channel {i}: {pnn} ({pns}), Range: {pnr}")
```

**Channel Type Identification:**

FlowIO automatically categorizes channels:

```python
# Get indices by channel type
scatter_idx = flow.scatter_indices  # [0, 1] for FSC, SSC
fluoro_idx = flow.fluoro_indices    # [2, 3, 4] for FL channels
time_idx = flow.time_index          # Index of time channel (or None)

# Access specific channel types
events = flow.as_array()
scatter_data = events[:, scatter_idx]
fluorescence_data = events[:, fluoro_idx]
```

**ANALYSIS Segment:**

If present, access processed results:

```python
if flow.analysis:
    analysis_keywords = flow.analysis  # Dictionary of ANALYSIS keywords
    print(analysis_keywords)
```

### Creating New FCS Files

Generate FCS files from NumPy arrays or other data sources.

**Basic Creation:**

```python
import numpy as np
from flowio import create_fcs

# Create event data (rows=events, columns=channels)
events = np.random.rand(10000, 5) * 1000

# Define channel names
channel_names = ['FSC-A', 'SSC-A', 'FL1-A', 'FL2-A', 'Time']

# Create FCS file
create_fcs('output.fcs', events, channel_names)
```

**With Descriptive Channel Names:**

```python
# Add optional descriptive names (PnS)
channel_names = ['FSC-A', 'SSC-A', 'FL1-A', 'FL2-A', 'Time']
descriptive_names = ['Forward Scatter', 'Side Scatter', 'FITC', 'PE', 'Time']

create_fcs('output.fcs',
           events,
           channel_names,
           opt_channel_names=descriptive_names)
```

**With Custom Metadata:**

```python
# Add TEXT segment metadata
metadata = {
    '$SRC': 'Python script',
    '$DATE': '19-OCT-2025',
    '$CYT': 'Synthetic Instrument',
    '$INST': 'Laboratory A'
}

create_fcs('output.fcs',
           events,
           channel_names,
           opt_channel_names=descriptive_names,
           metadata=metadata)
```

**Note:** FlowIO exports as FCS 3.1 with single-precision floating-point data.

### Exporting Modified Data

Modify existing FCS files and re-export them.

**Approach 1: The `write_fcs()` method**

```python
from flowio import FlowData

# Read original file
flow = FlowData('original.fcs')

# Write with updated metadata
flow.write_fcs('modified.fcs', metadata={'$SRC': 'Modified data'})
```

**Approach 2: Extract, modify, and recreate**

For modifying event data:

```python
from flowio import FlowData, create_fcs

# Read and extract data
flow = FlowData('original.fcs')
events = flow.as_array(preprocess=False)

# Modify event data
events[:, 0] = events[:, 0] * 1.5  # Scale first channel

# Create new FCS file with modified data
create_fcs('modified.fcs',
           events,
           flow.pnn_labels,
           opt_channel_names=flow.pns_labels,
           metadata=flow.text)
```

### Handling Multi-Dataset FCS Files

Some FCS files contain multiple datasets in a single file.

**Detecting Multi-Dataset Files:**

```python
from flowio import FlowData, MultipleDataSetsError

try:
    flow = FlowData('sample.fcs')
except MultipleDataSetsError:
    print("File contains multiple datasets")
    # Use read_multiple_data_sets() instead
```

**Reading All Datasets:**

```python
from flowio import read_multiple_data_sets

# Read all datasets from file
datasets = read_multiple_data_sets('multi_dataset.fcs')

print(f"Found {len(datasets)} datasets")

# Process each dataset
for i, dataset in enumerate(datasets):
    print(f"\nDataset {i}:")
    print(f"  Events: {dataset.event_count}")
    print(f"  Channels: {dataset.pnn_labels}")

    # Get event data for this dataset
    events = dataset.as_array()
    print(f"  Shape: {events.shape}")
    print(f"  Mean values: {events.mean(axis=0)}")
```

**Reading a Specific Dataset:**

```python
from flowio import FlowData

# Read first dataset (nextdata_offset=0)
first_dataset = FlowData('multi.fcs', nextdata_offset=0)

# Read second dataset using NEXTDATA offset from first
next_offset = int(first_dataset.text['$NEXTDATA'])
if next_offset > 0:
    second_dataset = FlowData('multi.fcs', nextdata_offset=next_offset)
```

## Data Preprocessing

FlowIO applies standard FCS preprocessing transformations when `preprocess=True`.

**Preprocessing Steps:**

1. **Gain Scaling:** Multiply values by the PnG (gain) keyword
2. **Logarithmic Transformation:** Apply the PnE exponential transformation if present
   - Formula: `value = a * 10^(b * raw_value)` where PnE = "a,b"
3. **Time Scaling:** Convert time values to appropriate units
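
To make steps 1-2 concrete, here is a minimal NumPy sketch of the stated formulas. This is not FlowIO's internal code, and it treats gain and log amplification as mutually exclusive per channel, which is an assumption:

```python
import numpy as np

def apply_fcs_scaling(raw, png=1.0, pne=(0.0, 0.0)):
    """Sketch of the PnG/PnE scaling described above (assumed semantics)."""
    a, b = pne
    if a > 0 and b > 0:
        # Log-amplified channel: value = a * 10^(b * raw_value)
        return a * np.power(10.0, b * raw)
    # Linear channel: gain scaling only
    return raw * png

linear = apply_fcs_scaling(np.array([0.0, 1.0, 2.0]), png=2.0)
logged = apply_fcs_scaling(np.array([0.0, 1.0]), pne=(1.0, 1.0))
```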

**Controlling Preprocessing:**

```python
# Preprocessed data (default)
preprocessed = flow.as_array(preprocess=True)

# Raw data (no transformations)
raw = flow.as_array(preprocess=False)
```

## Error Handling

Handle common FlowIO exceptions appropriately.

```python
from flowio import (
    FlowData,
    FCSParsingError,
    DataOffsetDiscrepancyError,
    MultipleDataSetsError
)

try:
    flow = FlowData('sample.fcs')
    events = flow.as_array()

except FCSParsingError as e:
    print(f"Failed to parse FCS file: {e}")
    # Try with relaxed parsing
    flow = FlowData('sample.fcs', ignore_offset_error=True)

except DataOffsetDiscrepancyError as e:
    print(f"Offset discrepancy detected: {e}")
    # Use ignore_offset_discrepancy parameter
    flow = FlowData('sample.fcs', ignore_offset_discrepancy=True)

except MultipleDataSetsError as e:
    print(f"Multiple datasets detected: {e}")
    # Use read_multiple_data_sets instead
    from flowio import read_multiple_data_sets
    datasets = read_multiple_data_sets('sample.fcs')

except Exception as e:
    print(f"Unexpected error: {e}")
```

## Common Use Cases

### Inspecting FCS File Contents

Quick exploration of FCS file structure:

```python
from flowio import FlowData

flow = FlowData('unknown.fcs')

print("=" * 50)
print(f"File: {flow.name}")
print(f"Version: {flow.version}")
print(f"Size: {flow.file_size:,} bytes")
print("=" * 50)

print(f"\nEvents: {flow.event_count:,}")
print(f"Channels: {flow.channel_count}")

print("\nChannel Information:")
for i, (pnn, pns) in enumerate(zip(flow.pnn_labels, flow.pns_labels)):
    ch_type = "scatter" if i in flow.scatter_indices else \
              "fluoro" if i in flow.fluoro_indices else \
              "time" if i == flow.time_index else "other"
    print(f"  [{i}] {pnn:10s} | {pns:30s} | {ch_type}")

print("\nKey Metadata:")
for key in ['$DATE', '$BTIM', '$ETIM', '$CYT', '$INST', '$SRC']:
    value = flow.text.get(key, 'N/A')
    print(f"  {key:15s}: {value}")
```

### Batch Processing Multiple Files

Process a directory of FCS files:

```python
from pathlib import Path
from flowio import FlowData
import pandas as pd

# Find all FCS files
fcs_files = list(Path('data/').glob('*.fcs'))

# Extract summary information
summaries = []
for fcs_path in fcs_files:
    try:
        flow = FlowData(str(fcs_path), only_text=True)
        summaries.append({
            'filename': fcs_path.name,
            'version': flow.version,
            'events': flow.event_count,
            'channels': flow.channel_count,
            'date': flow.text.get('$DATE', 'N/A')
        })
    except Exception as e:
        print(f"Error processing {fcs_path.name}: {e}")

# Create summary DataFrame
df = pd.DataFrame(summaries)
print(df)
```

### Converting FCS to CSV

Export event data to CSV format:

```python
from flowio import FlowData
import pandas as pd

# Read FCS file
flow = FlowData('sample.fcs')

# Convert to DataFrame
df = pd.DataFrame(
    flow.as_array(),
    columns=flow.pnn_labels
)

# Add metadata as attributes
df.attrs['fcs_version'] = flow.version
df.attrs['instrument'] = flow.text.get('$CYT', 'Unknown')

# Export to CSV
df.to_csv('output.csv', index=False)
print(f"Exported {len(df)} events to CSV")
```

### Filtering Events and Re-exporting

Apply filters and save filtered data:

```python
from flowio import FlowData, create_fcs
import numpy as np

# Read original file
flow = FlowData('sample.fcs')
events = flow.as_array(preprocess=False)

# Apply filtering (example: threshold on first channel)
fsc_idx = 0
threshold = 500
mask = events[:, fsc_idx] > threshold
filtered_events = events[mask]

print(f"Original events: {len(events)}")
print(f"Filtered events: {len(filtered_events)}")

# Create new FCS file with filtered data
create_fcs('filtered.fcs',
           filtered_events,
           flow.pnn_labels,
           opt_channel_names=flow.pns_labels,
           metadata={**flow.text, '$SRC': 'Filtered data'})
```

### Extracting Specific Channels

Extract and process specific channels:

```python
from flowio import FlowData
import numpy as np

flow = FlowData('sample.fcs')
events = flow.as_array()

# Extract fluorescence channels only
fluoro_indices = flow.fluoro_indices
fluoro_data = events[:, fluoro_indices]
fluoro_names = [flow.pnn_labels[i] for i in fluoro_indices]

print(f"Fluorescence channels: {fluoro_names}")
print(f"Shape: {fluoro_data.shape}")

# Calculate statistics per channel
for i, name in enumerate(fluoro_names):
    channel_data = fluoro_data[:, i]
    print(f"\n{name}:")
    print(f"  Mean: {channel_data.mean():.2f}")
    print(f"  Median: {np.median(channel_data):.2f}")
    print(f"  Std Dev: {channel_data.std():.2f}")
```

## Best Practices

1. **Memory Efficiency:** Use `only_text=True` when event data is not needed
2. **Error Handling:** Wrap file operations in try-except blocks for robust code
3. **Multi-Dataset Detection:** Catch MultipleDataSetsError and fall back to `read_multiple_data_sets()`
4. **Preprocessing Control:** Explicitly set the `preprocess` parameter based on analysis needs
5. **Offset Issues:** If parsing fails, try the `ignore_offset_discrepancy=True` parameter
6. **Channel Validation:** Verify channel counts and names match expectations before processing
7. **Metadata Preservation:** When modifying files, preserve the original TEXT segment keywords

## Advanced Topics

### Understanding FCS File Structure

FCS files consist of four segments:

1. **HEADER:** FCS version and byte offsets for the other segments
2. **TEXT:** Key-value metadata pairs (delimiter-separated)
3. **DATA:** Raw event data (binary/float/ASCII format)
4. **ANALYSIS** (optional): Results from data processing

Access these segments via FlowData attributes:

- `flow.header` - HEADER segment
- `flow.text` - TEXT segment keywords
- `flow.events` - DATA segment (as bytes)
- `flow.analysis` - ANALYSIS segment keywords (if present)
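
As an illustration of the HEADER layout, the sketch below parses the fixed-width header bytes directly, assuming the standard layout (a 6-byte version string, 4 spaces, then six right-justified 8-byte ASCII offsets for TEXT, DATA, and ANALYSIS). In practice `flow.header` gives you this without manual byte work:

```python
def parse_fcs_header(raw: bytes) -> dict:
    """Parse the fixed-width FCS HEADER segment (sketch, standard layout assumed)."""
    version = raw[0:6].decode('ascii').strip()  # e.g. 'FCS3.1'

    def field(i: int) -> int:
        s = raw[10 + 8 * i : 18 + 8 * i].decode('ascii').strip()
        return int(s) if s else 0

    return {
        'version': version,
        'text': (field(0), field(1)),      # TEXT segment start/end offsets
        'data': (field(2), field(3)),      # DATA segment start/end offsets
        'analysis': (field(4), field(5)),  # ANALYSIS offsets (0 if absent)
    }

# Synthetic header bytes for illustration
header = b'FCS3.1    ' + ''.join(f'{n:>8}' for n in (58, 1024, 1025, 9999, 0, 0)).encode('ascii')
print(parse_fcs_header(header))
```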

### Detailed API Reference

For comprehensive API documentation including all parameters, methods, exceptions, and FCS keyword reference, consult the detailed reference file:

**Read:** `references/api_reference.md`

The reference includes:

- Complete FlowData class documentation
- All utility functions (read_multiple_data_sets, create_fcs)
- Exception classes and handling
- FCS file structure details
- Common TEXT segment keywords
- Extended example workflows

When working with complex FCS operations or encountering unusual file formats, load this reference for detailed guidance.

## Integration Notes

**NumPy Arrays:** All event data is returned as NumPy ndarrays with shape (events, channels).

**Pandas DataFrames:** Easily convert to DataFrames for analysis:

```python
import pandas as pd
df = pd.DataFrame(flow.as_array(), columns=flow.pnn_labels)
```

**FlowKit Integration:** For advanced analysis (compensation, gating, FlowJo support), use the FlowKit library, which builds on FlowIO's parsing capabilities.

**Web Applications:** FlowIO's minimal dependencies make it ideal for web backend services processing FCS uploads.

## Troubleshooting

**Problem:** "Offset discrepancy error"
**Solution:** Use the `ignore_offset_discrepancy=True` parameter

**Problem:** "Multiple datasets error"
**Solution:** Use the `read_multiple_data_sets()` function instead of the FlowData constructor

**Problem:** Out of memory with large files
**Solution:** Use `only_text=True` for metadata-only operations, or process events in chunks
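
For the out-of-memory case, one hedged option is to view the raw DATA bytes without copying and process rows slice by slice. This assumes `$DATATYPE` 'F' with little-endian byte order and that `flow.events` exposes the raw DATA bytes; the buffer below is synthetic so the sketch stands alone:

```python
import numpy as np

channel_count = 3
raw_bytes = np.arange(12, dtype='<f4').tobytes()  # stand-in for flow.events

# Zero-copy view over the buffer, shaped (events, channels)
view = np.frombuffer(raw_bytes, dtype='<f4').reshape(-1, channel_count)

chunk_size = 2  # use something like 100_000 for real files
chunk_means = []
for start in range(0, view.shape[0], chunk_size):
    chunk = view[start:start + chunk_size]
    chunk_means.append(chunk.mean(axis=0))  # process each chunk independently
```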

**Problem:** Unexpected channel counts
**Solution:** Check for null channels; use the `null_channel_list` parameter to exclude them

**Problem:** Cannot modify event data in place
**Solution:** FlowIO doesn't support direct modification; extract the data, modify it, then use `create_fcs()` to save

## Summary

FlowIO provides essential FCS file handling capabilities for flow cytometry workflows. Use it for parsing, metadata extraction, and file creation. For simple file operations and data extraction, FlowIO is sufficient. For complex analysis including compensation and gating, integrate with FlowKit or other specialized tools.

---

# FlowIO API Reference

## Overview

FlowIO is a Python library for reading and writing Flow Cytometry Standard (FCS) files. It supports FCS versions 2.0, 3.0, and 3.1 with minimal dependencies.

## Installation

```bash
pip install flowio
```

Supports Python 3.9 and later.

## Core Classes

### FlowData

The primary class for working with FCS files.

#### Constructor

```python
FlowData(fcs_file,
         ignore_offset_error=False,
         ignore_offset_discrepancy=False,
         use_header_offsets=False,
         only_text=False,
         nextdata_offset=None,
         null_channel_list=None)
```

**Parameters:**

- `fcs_file`: File path (str), Path object, or file handle
- `ignore_offset_error` (bool): Ignore offset errors (default: False)
- `ignore_offset_discrepancy` (bool): Ignore offset discrepancies between the HEADER and TEXT sections (default: False)
- `use_header_offsets` (bool): Use HEADER section offsets instead of TEXT section offsets (default: False)
- `only_text` (bool): Only parse the TEXT segment, skipping DATA and ANALYSIS (default: False)
- `nextdata_offset` (int): Byte offset for reading multi-dataset files
- `null_channel_list` (list): List of PnN labels for null channels to exclude

#### Attributes

**File Information:**

- `name`: Name of the FCS file
- `file_size`: Size of the file in bytes
- `version`: FCS version (e.g., '3.0', '3.1')
- `header`: Dictionary containing HEADER segment information
- `data_type`: Type of data format ('I', 'F', 'D', 'A')

**Channel Information:**

- `channel_count`: Number of channels in the dataset
- `channels`: Dictionary mapping channel numbers to channel info
- `pnn_labels`: List of PnN (short channel name) labels
- `pns_labels`: List of PnS (descriptive stain name) labels
- `pnr_values`: List of PnR (range) values for each channel
- `fluoro_indices`: List of indices for fluorescence channels
- `scatter_indices`: List of indices for scatter channels
- `time_index`: Index of the time channel (or None)
- `null_channels`: List of null channel indices

**Event Data:**

- `event_count`: Number of events (rows) in the dataset
- `events`: Raw event data as bytes

**Metadata:**

- `text`: Dictionary of TEXT segment key-value pairs
- `analysis`: Dictionary of ANALYSIS segment key-value pairs (if present)

#### Methods

##### as_array()

```python
as_array(preprocess=True)
```

Return event data as a 2-D NumPy array.

**Parameters:**

- `preprocess` (bool): Apply gain, logarithmic, and time scaling transformations (default: True)

**Returns:**

- NumPy ndarray with shape (event_count, channel_count)

**Example:**

```python
flow_data = FlowData('sample.fcs')
events_array = flow_data.as_array()               # Preprocessed data
raw_array = flow_data.as_array(preprocess=False)  # Raw data
```

##### write_fcs()

```python
write_fcs(filename, metadata=None)
```

Export the FlowData instance as a new FCS file.

**Parameters:**

- `filename` (str): Output file path
- `metadata` (dict): Optional dictionary of TEXT segment keywords to add/update

**Example:**

```python
flow_data = FlowData('sample.fcs')
flow_data.write_fcs('output.fcs', metadata={'$SRC': 'Modified data'})
```

**Note:** Exports as FCS 3.1 with single-precision floating-point data.

## Utility Functions

### read_multiple_data_sets()

```python
read_multiple_data_sets(fcs_file,
                        ignore_offset_error=False,
                        ignore_offset_discrepancy=False,
                        use_header_offsets=False)
```

Read all datasets from an FCS file containing multiple datasets.

**Parameters:**

- Same as the FlowData constructor (except `nextdata_offset`)

**Returns:**

- List of FlowData instances, one for each dataset

**Example:**

```python
from flowio import read_multiple_data_sets

datasets = read_multiple_data_sets('multi_dataset.fcs')
print(f"Found {len(datasets)} datasets")
for i, dataset in enumerate(datasets):
    print(f"Dataset {i}: {dataset.event_count} events")
```

### create_fcs()

```python
create_fcs(filename,
           event_data,
           channel_names,
           opt_channel_names=None,
           metadata=None)
```

Create a new FCS file from event data.

**Parameters:**

- `filename` (str): Output file path
- `event_data` (ndarray): 2-D NumPy array of event data (rows=events, columns=channels)
- `channel_names` (list): List of PnN (short) channel names
- `opt_channel_names` (list): Optional list of PnS (descriptive) channel names
- `metadata` (dict): Optional dictionary of TEXT segment keywords

**Example:**

```python
import numpy as np
from flowio import create_fcs

# Create synthetic data
events = np.random.rand(10000, 5)
channels = ['FSC-A', 'SSC-A', 'FL1-A', 'FL2-A', 'Time']
opt_channels = ['Forward Scatter', 'Side Scatter', 'FITC', 'PE', 'Time']

create_fcs('synthetic.fcs',
           events,
           channels,
           opt_channel_names=opt_channels,
           metadata={'$SRC': 'Synthetic data'})
```

## Exception Classes

### FlowIOWarning

Generic warning class for non-critical issues.

### PnEWarning

Warning raised when PnE values are invalid during FCS file creation.

### FlowIOException

Base exception class for FlowIO errors.

### FCSParsingError

Raised when there are issues parsing an FCS file.

### DataOffsetDiscrepancyError

Raised when the HEADER and TEXT sections provide different byte offsets for data segments.

**Workaround:** Use the `ignore_offset_discrepancy=True` parameter when creating the FlowData instance.

### MultipleDataSetsError

Raised when attempting to read a file with multiple datasets using the standard FlowData constructor.

**Solution:** Use the `read_multiple_data_sets()` function instead.

## FCS File Structure Reference

FCS files consist of four segments:

1. **HEADER**: Contains the FCS version and byte locations of the other segments
2. **TEXT**: Key-value metadata pairs (delimited format)
3. **DATA**: Raw event data (binary, floating-point, or ASCII)
4. **ANALYSIS** (optional): Results from data processing

### Common TEXT Segment Keywords

- `$BEGINDATA`, `$ENDDATA`: Byte offsets for the DATA segment
- `$BEGINANALYSIS`, `$ENDANALYSIS`: Byte offsets for the ANALYSIS segment
- `$BYTEORD`: Byte order (1,2,3,4 for little-endian; 4,3,2,1 for big-endian)
- `$DATATYPE`: Data type ('I'=integer, 'F'=float, 'D'=double, 'A'=ASCII)
- `$MODE`: Data mode ('L'=list mode, most common)
- `$NEXTDATA`: Offset to the next dataset (0 if single dataset)
- `$PAR`: Number of parameters (channels)
- `$TOT`: Total number of events
- `PnN`: Short name for parameter n
- `PnS`: Descriptive stain name for parameter n
- `PnR`: Range (max value) for parameter n
- `PnE`: Amplification exponent for parameter n (format: "a,b" where value = a * 10^(b*x))
- `PnG`: Amplification gain for parameter n
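
Together, `$DATATYPE` and `$BYTEORD` determine how the DATA bytes decode. A minimal sketch of the mapping to a NumPy dtype (common cases only; for 'I' the integer width actually varies per `PnB`, so 32-bit here is an assumption):

```python
import numpy as np

def fcs_numpy_dtype(datatype: str, byteord: str) -> np.dtype:
    """Map $DATATYPE/$BYTEORD to a NumPy dtype (sketch, common cases)."""
    endian = '<' if byteord.startswith('1') else '>'    # '1,2,3,4' = little-endian
    base = {'I': 'u4', 'F': 'f4', 'D': 'f8'}[datatype]  # 'A' (ASCII) not handled
    return np.dtype(endian + base)
```

For example, `fcs_numpy_dtype('F', '1,2,3,4')` yields a little-endian float32 dtype suitable for `np.frombuffer`.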

## Channel Types

FlowIO automatically categorizes channels:

- **Scatter channels**: FSC (forward scatter), SSC (side scatter)
- **Fluorescence channels**: FL1, FL2, FITC, PE, etc.
- **Time channel**: Usually labeled "Time"

Access indices via:

- `flow_data.scatter_indices`
- `flow_data.fluoro_indices`
- `flow_data.time_index`
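
An illustrative heuristic for this categorization, keyed on PnN prefixes (a simplification for illustration, not FlowIO's exact detection logic):

```python
def categorize_channels(pnn_labels):
    """Rough channel typing from PnN names (heuristic sketch)."""
    categories = []
    for label in pnn_labels:
        upper = label.upper()
        if upper.startswith(('FSC', 'SSC')):
            categories.append('scatter')
        elif upper == 'TIME':
            categories.append('time')
        else:
            categories.append('fluoro')
    return categories

print(categorize_channels(['FSC-A', 'SSC-A', 'FL1-A', 'PE-A', 'Time']))
```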

## Data Preprocessing

When calling `as_array(preprocess=True)`, FlowIO applies:

1. **Gain scaling**: Multiply by the PnG value
2. **Logarithmic transformation**: Apply the PnE exponential transformation if present
3. **Time scaling**: Convert time values to appropriate units

To access raw, unprocessed data, use `as_array(preprocess=False)`.

## Best Practices

1. **Memory efficiency**: Use `only_text=True` when only metadata is needed
2. **Error handling**: Wrap file operations in try-except blocks for FCSParsingError
3. **Multi-dataset files**: Use `read_multiple_data_sets()` if unsure about the dataset count
4. **Offset issues**: If encountering offset errors, try `ignore_offset_discrepancy=True`
5. **Channel selection**: Use `null_channel_list` to exclude unwanted channels during parsing

## Integration with FlowKit

For advanced flow cytometry analysis including compensation, gating, and GatingML support, consider using the FlowKit library alongside FlowIO. FlowKit provides higher-level abstractions built on top of FlowIO's file parsing capabilities.

## Example Workflows

### Basic File Reading

```python
from flowio import FlowData

# Read FCS file
flow = FlowData('experiment.fcs')

# Print basic info
print(f"Version: {flow.version}")
print(f"Events: {flow.event_count}")
print(f"Channels: {flow.channel_count}")
print(f"Channel names: {flow.pnn_labels}")

# Get event data
events = flow.as_array()
print(f"Data shape: {events.shape}")
```

### Metadata Extraction

```python
from flowio import FlowData

flow = FlowData('sample.fcs', only_text=True)

# Access metadata
print(f"Acquisition date: {flow.text.get('$DATE', 'N/A')}")
print(f"Instrument: {flow.text.get('$CYT', 'N/A')}")

# Channel information
for i, (pnn, pns) in enumerate(zip(flow.pnn_labels, flow.pns_labels)):
    print(f"Channel {i}: {pnn} ({pns})")
```

### Creating New FCS Files

```python
import numpy as np
from flowio import create_fcs

# Generate or process data
data = np.random.rand(5000, 3) * 1000

# Define channels
channels = ['FSC-A', 'SSC-A', 'FL1-A']
stains = ['Forward Scatter', 'Side Scatter', 'GFP']

# Create FCS file
create_fcs('output.fcs',
           data,
           channels,
           opt_channel_names=stains,
           metadata={
               '$SRC': 'Python script',
               '$DATE': '19-OCT-2025'
           })
```

### Processing Multi-Dataset Files

```python
from flowio import read_multiple_data_sets

# Read all datasets
datasets = read_multiple_data_sets('multi.fcs')

# Process each dataset
for i, dataset in enumerate(datasets):
    print(f"\nDataset {i}:")
    print(f"  Events: {dataset.event_count}")
    print(f"  Channels: {dataset.pnn_labels}")

    # Get data array
    events = dataset.as_array()
    mean_values = events.mean(axis=0)
    print(f"  Mean values: {mean_values}")
```

### Modifying and Re-exporting

```python
from flowio import FlowData, create_fcs

# Read original file
flow = FlowData('original.fcs')

# Get event data
events = flow.as_array(preprocess=False)

# Modify data (example: apply custom transformation)
events[:, 0] = events[:, 0] * 1.5  # Scale first channel

# FlowIO can't write modified events back into the original FlowData;
# create a new file with create_fcs() instead:
create_fcs('modified.fcs',
           events,
           flow.pnn_labels,
           opt_channel_names=flow.pns_labels,
           metadata=flow.text)
```