Microsoft Semantic Kernel
Master Semantic Kernel for building enterprise-grade AI agents with plugin architecture
Production-Ready Enterprise Features
Semantic Kernel is built for enterprise production workloads with comprehensive logging, telemetry, security, and native Azure integration. Deploy AI agents with confidence.
📊 Logging & Observability
Built-in Logging with ILogger
```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

// Configure logging
var loggerFactory = LoggerFactory.Create(builder =>
{
    builder.AddConsole();
    builder.AddApplicationInsights(instrumentationKey);
    builder.SetMinimumLevel(LogLevel.Information);
});

// The kernel automatically logs:
// - Function invocations and results
// - Planner decisions and reasoning
// - LLM calls and token usage
// - Errors and retries
var kernelBuilder = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);
kernelBuilder.Services.AddSingleton<ILoggerFactory>(loggerFactory); // Build() takes no arguments; register via DI
var kernel = kernelBuilder.Build();
```

🔍 What Gets Logged
- Function execution traces
- LLM requests/responses
- Token usage per call
- Planner reasoning steps
- Memory operations
- Errors with stack traces
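Beyond the built-in logs, execution traces can also be captured in application code with a function-invocation filter. A minimal sketch, assuming the Semantic Kernel 1.x `IFunctionInvocationFilter` API; the `TraceFilter` class name and registration shown are illustrative:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

// Illustrative filter that traces every kernel function invocation
public sealed class TraceFilter : IFunctionInvocationFilter
{
    private readonly ILogger _logger;
    public TraceFilter(ILogger logger) => _logger = logger;

    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        _logger.LogInformation("Invoking {Plugin}.{Function}",
            context.Function.PluginName, context.Function.Name);

        await next(context); // run the function (and any downstream filters)

        _logger.LogInformation("Finished {Function}, usage: {Usage}",
            context.Function.Name, context.Result.Metadata?["Usage"]);
    }
}

// Registered on the kernel builder, e.g.:
// kernelBuilder.Services.AddSingleton<IFunctionInvocationFilter>(new TraceFilter(logger));
```

Because the filter wraps `next(context)`, it sees both the call and its result, which makes it a convenient place to hang timing, token accounting, or redaction logic.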
📈 Application Insights
- Request telemetry dashboards
- Performance metrics
- Dependency tracking
- Custom events & metrics
- Alert configuration
- Cost analysis
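Custom events and metrics can be pushed alongside the automatic telemetry. A small sketch using the Application Insights `TelemetryClient`; the connection string, metric names, and values are placeholders:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Placeholder connection string, copied from the Application Insights resource
var config = new TelemetryConfiguration
{
    ConnectionString = connectionString
};
var telemetry = new TelemetryClient(config);

// Emit token usage as a custom metric to feed cost dashboards
telemetry.TrackMetric("SK.TokensUsed", 1234);

// Emit a custom event when a kernel function runs
telemetry.TrackEvent("SK.FunctionInvoked",
    new Dictionary<string, string> { ["Function"] = "Summarize" });

telemetry.Flush(); // ensure buffered telemetry is sent before shutdown
```

Metrics emitted this way show up next to the automatic request and dependency telemetry, so alerts and cost analysis can be built on them in the same dashboards.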
🔐 Security & Managed Identity
Azure Managed Identity (No API Keys!)
```csharp
using Azure.Identity;

// No hardcoded secrets - use Managed Identity
var credential = new DefaultAzureCredential();

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        "gpt-4",                                  // deployment name
        "https://your-openai.openai.azure.com/",  // endpoint
        credential                                // ← automatic Azure AD authentication
    )
    .Build();

// The same credential also works for Key Vault, Storage, Cosmos DB, etc.
// The identity is managed by Azure - no secrets in code.
```

Azure Key Vault Integration
```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Store secrets in Key Vault
var client = new SecretClient(
    new Uri("https://your-vault.vault.azure.net/"),
    new DefaultAzureCredential()
);
var apiKey = await client.GetSecretAsync("OpenAI-ApiKey");

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4",
        apiKey: apiKey.Value.Value // ← From Key Vault, not source code
    )
    .Build();
```

☁️ Native Azure Integration
🤖 Azure OpenAI Service
First-class support for Azure OpenAI with enterprise SLAs
```csharp
builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, credential);
```

🔍 Azure Cognitive Search
Semantic memory with enterprise-grade vector search
```csharp
memoryBuilder.WithAzureCognitiveSearchMemoryStore(endpoint, credential);
```

🗄️ Azure Cosmos DB
Persist kernel state and conversation history globally
```csharp
// IMemoryStore is SK's memory-store abstraction; CosmosDbStore is a custom implementation
services.AddSingleton<IMemoryStore>(new CosmosDbStore(...));
```

⚡ Resilience & Retry Policies
Polly Integration for Retries
```csharp
using Microsoft.Extensions.Http;
using Polly;

// Configure retry with exponential backoff (2s, 4s, 8s)
var retryPolicy = Policy<HttpResponseMessage>
    .Handle<HttpRequestException>()
    .OrResult(r => (int)r.StatusCode == 429) // also retry on rate limits
    .WaitAndRetryAsync(3, retryAttempt =>
        TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));

// Apply to the kernel's HTTP client
var handler = new PolicyHttpMessageHandler(retryPolicy)
{
    InnerHandler = new HttpClientHandler() // PolicyHttpMessageHandler delegates to an inner handler
};

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName, endpoint, apiKey,
        httpClient: new HttpClient(handler)
    )
    .Build();

// Transient failures (rate limits, timeouts) are now retried automatically
```

💰 Cost Management
Track Token Usage
```csharp
var result = await kernel.InvokeAsync(...);

// Access token metadata
var usage = result.Metadata?["Usage"];
Console.WriteLine($"Tokens: {usage}");
```

Set Token Limits

```csharp
var settings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 500,
    Temperature = 0.7
};

await kernel.InvokeAsync(..., new KernelArguments(settings));
```

🏢 Enterprise Deployment Checklist
- ✓ Use Managed Identity - No API keys in code or config files
- ✓ Enable Application Insights - Monitor performance, costs, and errors
- ✓ Configure retry policies - Handle rate limits and transient failures
- ✓ Set token limits - Prevent runaway costs from LLM calls
- ✓ Use Key Vault - Centralize secret management
- ✓ Test at scale - Load test with realistic workloads
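Put together, a production kernel following this checklist might be wired up roughly like this. A sketch only: the deployment name and endpoint are placeholders, and the exact builder calls assume Semantic Kernel 1.x:

```csharp
using Azure.Identity;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

// 1. Managed Identity everywhere - no secrets in code
var credential = new DefaultAzureCredential();

// 2. Logging wired for observability (console here; Application Insights in production)
var loggerFactory = LoggerFactory.Create(b =>
    b.AddConsole().SetMinimumLevel(LogLevel.Information));

// 3. Kernel built against Azure OpenAI with the shared credential
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    "gpt-4",                                  // deployment name (placeholder)
    "https://your-openai.openai.azure.com/",  // endpoint (placeholder)
    credential);
builder.Services.AddSingleton<ILoggerFactory>(loggerFactory);
var kernel = builder.Build();
```

Retry policies, Key Vault lookups, and token limits from the sections above slot into the same builder, so the whole checklist can be satisfied at a single composition point.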