Quantum Minds Code Operators
Introduction
Code operators in Quantum Minds execute custom code and generate programmatic solutions within your minds. They let you implement complex algorithms, custom data processing, and specialized logic beyond what the standard operators offer, giving you maximum flexibility and customization.
Available Code Operators
| Operator | Description | Common Use Cases |
|---|---|---|
| Code.Python.Execute | Executes custom Python code | Data processing, algorithms, external integrations |
| Code.Python.Generate | Generates Python code from natural language | Automation scripts, data transformation, utilities |
Code.Python.Execute (CustomPython)
The Code.Python.Execute operator (formerly CustomPython) runs custom Python code within the Quantum Minds environment, providing powerful capabilities for data manipulation, algorithmic processing, and integration.
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | Python code to be executed |
| args | string | No | Arguments to pass to the code |
| trigger | string | No | Optional control signal |
Outputs
| Parameter | Type | Description |
|---|---|---|
| type | string | Output format (markdown) |
| content | string | Formatted output for display |
| result | string | Return value from the code execution |
| logs | string | Captured print statements and logs |
| error | string | Error messages if execution fails |
Example Usage
Code: """
import pandas as pd
import numpy as np
# Parse input data
data = [{'date': '2023-01-01', 'value': 10}, {'date': '2023-01-02', 'value': 15}]
df = pd.DataFrame(data)
# Convert date column to datetime
df['date'] = pd.to_datetime(df['date'])
# Calculate statistics
stats = {
    'mean': df['value'].mean(),
    'median': df['value'].median(),
    'std': df['value'].std(),
    'min': df['value'].min(),
    'max': df['value'].max()
}
# Add moving average
df['moving_avg'] = df['value'].rolling(window=2).mean()
# Return results
print("Processing complete")
return {'statistics': stats, 'processed_data': df.to_dict(orient='records')}
"""
Output:
- content: Formatted results of the code execution
- result: {'statistics': {'mean': 12.5, ...}, 'processed_data': [...]}
- logs: "Processing complete"
- error: ""
Available Libraries
The Code.Python.Execute environment includes common data science and utility libraries:
| Category | Libraries |
|---|---|
| Data Analysis | pandas, numpy, scipy |
| Visualization | matplotlib, seaborn |
| Machine Learning | scikit-learn |
| Natural Language | nltk, spacy |
| Utilities | requests, json, re, datetime |
| Mathematics | math, statistics |
| File Processing | csv, io |
Best Practices
- Structure your code with clear sections
- Include comments for complex logic
- Handle exceptions with try/except blocks (illustrated in the sketch after this list)
- Use print statements for logging
- Return structured results as dictionaries
- Keep computations reasonable in size
- Validate inputs before processing
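For example, here is a minimal sketch that applies several of these practices at once. It follows the top-level return convention used throughout this page; that args is exposed to the code as a string variable is an assumption, not something the inputs table specifies.

import json

try:
    # Validate input before processing (assumes args is available as a string)
    payload = json.loads(args) if args else {}
    values = [float(v) for v in payload.get('values', [])]
    if not values:
        raise ValueError("args must include a non-empty 'values' list")
    print(f"Processing {len(values)} values")  # captured in the logs output
    # Return structured results as a dictionary
    return {'count': len(values), 'total': sum(values), 'mean': sum(values) / len(values)}
except Exception as e:
    print(f"Execution failed: {e}")
    return {'error': str(e)}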
Security Considerations
The Code.Python.Execute operator runs in a sandboxed environment with:
- Limited execution time
- Memory restrictions
- Network access limitations (see the defensive sketch below)
- File system restrictions
- Library whitelisting
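Given these restrictions, it pays to write defensively around operations that may be blocked. A minimal sketch (the endpoint is hypothetical, and the exact limits depend on your environment):

import requests

# Wrap network calls so a blocked or slow request degrades gracefully
try:
    response = requests.get("https://api.example.com/health", timeout=5)
    response.raise_for_status()
    print("Network call succeeded")
except requests.RequestException as e:
    print(f"Network unavailable or blocked: {e}")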
Code.Python.Generate (PythonCodeGenerator)
The Code.Python.Generate operator (formerly PythonCodeGenerator) creates Python code based on natural language descriptions, enabling non-developers to leverage Python's capabilities.
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | Description of the code to generate |
| trigger | string | No | Optional control signal |
Outputs
| Parameter | Type | Description |
|---|---|---|
| type | string | Output format (markdown) |
| content | string | Generated Python code with explanations |
Example Usage
Prompt: "Generate a Python function that takes a CSV file path as input, reads the data, filters rows where the 'status' column equals 'active', calculates the average value of the 'revenue' column for those active rows, and returns both the filtered dataframe and the average revenue"
Output: Complete Python function with documentation, error handling, and explanations of each step
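The exact output varies from run to run, but the generated function might resemble this sketch:

import pandas as pd

def average_active_revenue(csv_path):
    """Read a CSV, keep rows where status == 'active', and return
    (filtered_dataframe, average_revenue)."""
    df = pd.read_csv(csv_path)
    for column in ('status', 'revenue'):
        if column not in df.columns:
            raise ValueError(f"Missing required column: {column}")
    active = df[df['status'] == 'active']
    return active, active['revenue'].mean()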
Generation Capabilities
The Code.Python.Generate operator can create:
- Data processing functions
- Analysis algorithms
- Visualization code
- Utility functions
- API interaction scripts
- File processing routines
- Data transformation logic
Best Practices
- Be specific about input and output requirements
- Describe the steps or algorithm clearly
- Mention error handling needs
- Specify performance considerations
- Include example input/output if helpful (an example prompt follows this list)
- Consider readability requirements
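For example: "Write a Python function that takes a list of order dictionaries, skips entries missing an 'amount' key, and returns a dictionary with the total and average amount; raise a ValueError if no valid entries remain."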
Combining Code Operators
A powerful pattern is to use Code.Python.Generate to create code and then execute it with Code.Python.Execute:
{
"operator": "Code.Python.Generate",
"input": {
"prompt": "Write Python code that analyzes a time series for seasonality, trend, and outliers using the STL decomposition method",
"trigger": "code_generation"
}
}
↓
{
"operator": "Flow.Condition",
"input": {
"prompt": "Check if the generated code in $Code.Python.Generate_001.output.content is complete and has no syntax errors",
"trigger": "code_validation"
}
}
then_ →
{
"operator": "Code.Python.Execute",
"input": {
"code": "$Code.Python.Generate_001.output.content",
"args": "{ \"data\": $SQLExecution_001.output.content, \"date_column\": \"transaction_date\", \"value_column\": \"amount\" }",
"trigger": "code_execution"
}
}
else_ →
{
"operator": "TextSummarize",
"input": {
"prompt": "The generated code has issues and needs revision. Please review and try again.",
"trigger": "error_notification"
}
}
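Validating the generated code before running it means syntax problems surface as a routed error message on the else_ branch rather than as a failed execution downstream.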
Advanced Code Usage Patterns
Data Enrichment Pipeline
{
"operator": "SQLExecution",
"input": {
"sql": "SELECT * FROM customers LIMIT 100",
"dataset": "customer_database",
"trigger": "data_retrieval"
}
}
↓
{
"operator": "Code.Python.Execute",
"input": {
"code": """
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
# Convert input to dataframe
data = $SQLExecution_001.output.content
df = pd.DataFrame(data)
# Feature engineering
df['total_spend'] = df['lifetime_value']
df['activity_score'] = df['purchase_count'] / df['account_age_days']
df['recency_days'] = (pd.Timestamp.now() - pd.to_datetime(df['last_purchase_date'])).dt.days
# Clustering
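# Note: these features are on very different scales; in practice they
# should be standardized (e.g., with sklearn's StandardScaler) before KMeans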
features = df[['total_spend', 'activity_score', 'recency_days']].values
kmeans = KMeans(n_clusters=4, random_state=42)
df['segment'] = kmeans.fit_predict(features)
# Map segments to meaningful labels
segment_mapping = {
    0: 'High Value',
    1: 'New Potential',
    2: 'At Risk',
    3: 'Low Engagement'
}
df['segment_name'] = df['segment'].map(segment_mapping)
# Return enriched data
return df.to_dict(orient='records')
""",
"trigger": "customer_segmentation"
}
}
↓
{
"operator": "TableToGraph",
"input": {
"prompt": "Create a bubble chart showing customer segments, with total_spend on the x-axis, activity_score on the y-axis, recency_days as the bubble size, and segment_name as the color",
"dataframe": "$Code.Python.Execute_001.output.result",
"trigger": "visualization_creation"
}
}
Custom Algorithm Implementation
{
"operator": "Code.Python.Execute",
"input": {
"code": """
import pandas as pd
import numpy as np
from scipy import stats
def detect_anomalies(data, value_column, window_size=10, sigma=3):
    \"\"\"
    Implements a moving window anomaly detection algorithm
    \"\"\"
    df = pd.DataFrame(data)
    # Ensure data is sorted by date
    if 'date' in df.columns:
        df['date'] = pd.to_datetime(df['date'])
        df = df.sort_values('date')
    # Calculate rolling statistics
    df['rolling_mean'] = df[value_column].rolling(window=window_size).mean()
    df['rolling_std'] = df[value_column].rolling(window=window_size).std()
    # Define boundaries
    df['upper_bound'] = df['rolling_mean'] + (sigma * df['rolling_std'])
    df['lower_bound'] = df['rolling_mean'] - (sigma * df['rolling_std'])
    # Flag anomalies
    df['is_anomaly'] = (df[value_column] > df['upper_bound']) | (df[value_column] < df['lower_bound'])
    # Z-score for severity
    df['z_score'] = np.abs((df[value_column] - df['rolling_mean']) / df['rolling_std'])
    # Backfill NaN values from the initial rolling window
    df = df.bfill()
    return df.to_dict(orient='records')
# Process input data
result = detect_anomalies(
    $SQLExecution_001.output.content,
    value_column='daily_sales',
    window_size=7,
    sigma=2.5
)
return result
""",
"trigger": "anomaly_detection"
}
}
External API Integration
{
"operator": "Code.Python.Execute",
"input": {
"code": """
import requests
import json
import pandas as pd
from datetime import datetime, timedelta
# Configuration
API_KEY = "your_api_key" # In production, use secrets management
BASE_URL = "https://api.example.com/v1"
# Prepare parameters
today = datetime.now()
start_date = (today - timedelta(days=30)).strftime("%Y-%m-%d")
end_date = today.strftime("%Y-%m-%d")
# API request
try:
    response = requests.get(
        f"{BASE_URL}/market-data",
        params={
            "symbol": "AAPL,MSFT,GOOGL",
            "start_date": start_date,
            "end_date": end_date,
            "interval": "daily"
        },
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        timeout=10
    )
    response.raise_for_status()  # Raise exception for HTTP errors
    data = response.json()
    # Transform API response to dataframe
    results = []
    for symbol, prices in data['data'].items():
        for price in prices:
            results.append({
                'symbol': symbol,
                'date': price['date'],
                'open': price['open'],
                'high': price['high'],
                'low': price['low'],
                'close': price['close'],
                'volume': price['volume']
            })
    df = pd.DataFrame(results)
    # Calculate additional metrics
    df['date'] = pd.to_datetime(df['date'])
    df['daily_return'] = df.groupby('symbol')['close'].pct_change() * 100
    df['moving_avg_5d'] = df.groupby('symbol')['close'].rolling(window=5).mean().reset_index(0, drop=True)
    print(f"Successfully retrieved market data for {len(data['data'])} symbols")
    return df.to_dict(orient='records')
except Exception as e:
    print(f"Error retrieving market data: {str(e)}")
    return {"error": str(e)}
""",
"trigger": "market_data_retrieval"
}
}
Integration with Other Operators
Enhancing SQL Analysis
{
"operator": "TextToSQL",
"input": {
"prompt": "Get monthly sales totals for the past year by product category",
"dataset": "sales_database",
"trigger": "sales_query"
}
}
↓
{
"operator": "SQLExecution",
"input": {
"sql": "$TextToSQL_001.output.content",
"dataset": "sales_database",
"trigger": "sales_data_retrieval"
}
}
↓
{
"operator": "Code.Python.Execute",
"input": {
"code": """
import pandas as pd
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose
# Process sales data
data = $SQLExecution_001.output.content
df = pd.DataFrame(data)
# Prepare time series
df['month'] = pd.to_datetime(df['month'])
df = df.set_index('month')
df = df.sort_index()
# Create pivot table for categories
pivot = df.pivot_table(
    index='month',
    columns='category',
    values='sales_amount',
    aggfunc='sum'
).fillna(0)
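# Note: multiplicative decomposition requires strictly positive values, so
# the fillna(0) above will raise for categories with zero-sales months;
# use model='additive' if that can occur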
# Perform seasonal decomposition for each category
results = {}
for category in pivot.columns:
    if len(pivot[category]) >= 12:  # Need at least a year of data
        decomposition = seasonal_decompose(
            pivot[category],
            model='multiplicative',
            period=12
        )
        results[category] = {
            'trend': decomposition.trend.dropna().to_dict(),
            'seasonal': decomposition.seasonal.dropna().to_dict(),
            'residual': decomposition.resid.dropna().to_dict()
        }
# Calculate year-over-year growth
pivot['total'] = pivot.sum(axis=1)
yoy_growth = pivot['total'].pct_change(periods=12) * 100
growth_data = yoy_growth.dropna().to_dict()
return {
    'raw_data': df.reset_index().to_dict(orient='records'),
    'decomposition': results,
    'yoy_growth': growth_data
}
""",
"trigger": "seasonal_analysis"
}
}
↓
{
"operator": "TableToGraph",
"input": {
"prompt": "Create a line chart showing the trend components for each product category",
"dataframe": "$Code.Python.Execute_001.output.result.decomposition",
"trigger": "trend_visualization"
}
}
Text Processing Enhancement
{
"operator": "RAGSummarize",
"input": {
"prompt": "Extract all numerical data and statistics from our annual reports",
"collection": "financial_reports",
"trigger": "data_extraction"
}
}
↓
{
"operator": "Code.Python.Execute",
"input": {
"code": """
import re
import pandas as pd
import json
# Extract text content
text = $RAGSummarize_001.output.content
# Regular expressions for different numerical formats
patterns = {
    'percentages': r'(\d+\.?\d*)%',
    'currency_millions': r'\$(\d+\.?\d*)\s*million',
    'currency_billions': r'\$(\d+\.?\d*)\s*billion',
    'currency_simple': r'\$(\d+(?:,\d{3})*(?:\.\d+)?)',
    'simple_numbers': r'(\d+(?:,\d{3})*(?:\.\d+)?)\s+(?:customers|users|employees|transactions)'
}
# Extract matches
results = {}
for key, pattern in patterns.items():
    matches = re.findall(pattern, text)
    results[key] = matches
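# Note: the currency_simple pattern also matches the dollar figure inside
# "million"/"billion" phrases, so those amounts can be double-counted below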
# Process currency values to standardized format
standardized_values = []
if results['currency_simple']:
    for value in results['currency_simple']:
        standardized_values.append({
            'type': 'currency',
            'raw_value': value,
            'numeric_value': float(value.replace(',', '')),
            'unit': 'USD'
        })
if results['currency_millions']:
    for value in results['currency_millions']:
        standardized_values.append({
            'type': 'currency',
            'raw_value': value + ' million',
            'numeric_value': float(value) * 1000000,
            'unit': 'USD'
        })
if results['currency_billions']:
    for value in results['currency_billions']:
        standardized_values.append({
            'type': 'currency',
            'raw_value': value + ' billion',
            'numeric_value': float(value) * 1000000000,
            'unit': 'USD'
        })
# Process percentages
if results['percentages']:
    for value in results['percentages']:
        standardized_values.append({
            'type': 'percentage',
            'raw_value': value + '%',
            'numeric_value': float(value),
            'unit': 'percent'
        })
# Convert to dataframe
df = pd.DataFrame(standardized_values)
# Sort by value
if not df.empty:
    df = df.sort_values('numeric_value', ascending=False)
print(f"Extracted {len(standardized_values)} numerical values from the report")
return df.to_dict(orient='records')
""",
"trigger": "numerical_extraction"
}
}
Best Practices for Code Operators
Writing Effective Code
- Follow Python best practices and conventions
- Structure code with functions for readability
- Include error handling for robustness
- Add comments for complex sections
- Use appropriate libraries for the task
- Optimize for performance when necessary
- Write modular, reusable code
Prompt Engineering for Code Generation
- Be specific about functionality requirements
- Describe inputs and outputs clearly
- Specify error handling needs
- Mention performance considerations
- Include examples if helpful
- Request documentation when important
Security and Safety
- Avoid hardcoding sensitive information (see the sketch after this list)
- Validate all inputs before processing
- Implement proper error handling
- Be mindful of resource usage
- Consider potential security implications
- Follow least privilege principles
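For instance, rather than hardcoding a key as in the earlier market-data example, read it from configuration. A sketch (the variable name is hypothetical, and whether the sandbox exposes environment variables is an assumption):

import os

# Fail fast if the secret is missing instead of embedding it in the code
api_key = os.environ.get("MARKET_API_KEY")
if not api_key:
    raise RuntimeError("MARKET_API_KEY is not set")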
Testing and Debugging
- Start with simple code and add complexity incrementally (sketched after this list)
- Use print statements for debugging
- Validate output structure and types
- Test edge cases and boundary conditions
- Consider error states and recovery
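A minimal sketch of this incremental approach, using hypothetical data and the top-level return convention shown earlier:

# Start simple: verify the input parses before adding analysis steps
data = [{'value': 10}, {'value': None}]  # hypothetical edge case
valid = [row for row in data if row.get('value') is not None]
print(f"Kept {len(valid)} of {len(data)} rows")  # debug via captured logs
return {'valid_rows': valid}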
Next Steps
Explore how Code Operators can be combined with UI Operators to create interactive experiences and visualizations based on your custom code.
Overview | Operator Categories | Flow Operators | UI Operators