Comprehensive Guide to SQL Server Table Sizes: Modern Methods & Alternative Approaches (2016-2024)

Introduction

SQL Server has evolved significantly since 2010, introducing powerful new features for monitoring table sizes and database storage. This comprehensive guide covers traditional methods alongside cutting-edge approaches available in modern SQL Server versions (2016-2024), including Azure SQL Database optimizations.

Whether you’re managing on-premises SQL Server instances, Azure SQL databases, or hybrid environments, this guide provides the most efficient and feature-rich methods for table size analysis.

What’s New in Modern SQL Server Versions

SQL Server 2016+ Features

  • sys.dm_db_file_space_usage: Enhanced file space monitoring
  • Query Store integration: Historical size tracking
  • Temporal table considerations: System-versioned table sizing

SQL Server 2017+ Features

  • Graph database tables: Specialized sizing for graph workloads
  • Adaptive query processing: Better performance for size queries

SQL Server 2019+ Features

  • Intelligent Query Processing: Optimized execution for complex size queries
  • Memory-optimized tempdb metadata: Faster system view queries
  • Accelerated Database Recovery: Impact on log space calculations

SQL Server 2022+ Features

  • Intelligent Query Processing enhancements: Parameter Sensitive Plan optimization
  • Ledger tables: Additional considerations for blockchain-verified tables
  • Query Store for secondary replicas: Better monitoring in Always On scenarios

Method 1: Modern Dynamic Management Functions (SQL Server 2016+)

-- Most efficient method for SQL Server 2016+
SELECT 
    OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
    OBJECT_NAME(ps.object_id) AS TableName,
    SUM(ps.row_count) AS TotalRows,
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS ReservedSpaceMB,
    SUM(ps.used_page_count) * 8.0 / 1024 AS UsedSpaceMB,
    SUM(ps.in_row_data_page_count) * 8.0 / 1024 AS InRowDataMB,
    SUM(ps.lob_used_page_count) * 8.0 / 1024 AS LobDataMB,
    SUM(ps.row_overflow_used_page_count) * 8.0 / 1024 AS RowOverflowMB,
    -- Space utilization (used vs. reserved pages); not a true compression ratio
    CASE 
        WHEN SUM(ps.reserved_page_count) > 0 
        THEN (SUM(ps.used_page_count) * 100.0) / SUM(ps.reserved_page_count)
        ELSE 0 
    END AS SpaceUtilizationPct,
    -- Detailed in-row page breakdown
    SUM(ps.in_row_used_page_count) * 8.0 / 1024 AS InRowUsedMB,
    SUM(ps.in_row_reserved_page_count) * 8.0 / 1024 AS InRowReservedMB
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
WHERE t.type = 'U'
    AND ps.index_id < 2  -- Clustered index or heap only
GROUP BY ps.object_id
ORDER BY SUM(ps.reserved_page_count) DESC;
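
The `* 8.0 / 1024` arithmetic above converts 8-KB page counts into megabytes. A minimal Python sketch of the same conversions (the helper names are illustrative, not part of any SQL Server API):

```python
# SQL Server stores data in 8-KB pages; 1024 KB per MB.
PAGE_SIZE_KB = 8

def pages_to_mb(page_count: int) -> float:
    """Size in MB represented by `page_count` 8-KB pages."""
    return page_count * PAGE_SIZE_KB / 1024

def space_utilization_pct(used_pages: int, reserved_pages: int) -> float:
    """Used/reserved percentage, mirroring the CASE expression in the query."""
    if reserved_pages == 0:
        return 0.0
    return used_pages * 100.0 / reserved_pages

print(pages_to_mb(1280))                 # 1280 pages -> 10.0 MB
print(space_utilization_pct(900, 1000))  # 90.0
```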

Enhanced Query with Temporal Table Support (SQL Server 2016+)

-- Advanced query including temporal table analysis
WITH ModernTableSizes AS (
    SELECT 
        OBJECT_SCHEMA_NAME(t.object_id) AS SchemaName,
        t.name AS TableName,
        t.temporal_type_desc AS TemporalType,
        t.history_table_id,
        CASE WHEN t.temporal_type = 2 THEN OBJECT_NAME(t.history_table_id) END AS HistoryTableName,
        SUM(ps.row_count) AS CurrentRows,
        SUM(ps.reserved_page_count) * 8.0 / 1024 AS CurrentSizeMB,
        SUM(ps.used_page_count) * 8.0 / 1024 AS CurrentUsedMB,
        -- Memory-optimized table detection (2014+, enhanced 2016+)
        t.durability_desc,
        t.is_memory_optimized
    FROM sys.tables t
    INNER JOIN sys.dm_db_partition_stats ps ON t.object_id = ps.object_id
    WHERE t.type = 'U'
        AND t.temporal_type <> 1  -- Exclude history tables; they are sized separately below
        AND ps.index_id < 2
    GROUP BY t.object_id, t.name, t.temporal_type_desc, t.history_table_id, 
             t.durability_desc, t.is_memory_optimized, OBJECT_SCHEMA_NAME(t.object_id)
),
HistoryTableSizes AS (
    SELECT 
        h.object_id AS HistoryTableId,
        SUM(ps.row_count) AS HistoryRows,
        SUM(ps.reserved_page_count) * 8.0 / 1024 AS HistorySizeMB
    FROM sys.tables h
    INNER JOIN sys.dm_db_partition_stats ps ON h.object_id = ps.object_id
    WHERE h.temporal_type = 1  -- History table
        AND ps.index_id < 2
    GROUP BY h.object_id
)
SELECT 
    mts.SchemaName,
    mts.TableName,
    mts.TemporalType,
    mts.HistoryTableName,
    FORMAT(mts.CurrentRows, 'N0') AS [Current Rows],
    FORMAT(mts.CurrentSizeMB, 'N2') + ' MB' AS [Current Size],
    FORMAT(ISNULL(hts.HistoryRows, 0), 'N0') AS [History Rows],
    FORMAT(ISNULL(hts.HistorySizeMB, 0), 'N2') + ' MB' AS [History Size],
    FORMAT(mts.CurrentSizeMB + ISNULL(hts.HistorySizeMB, 0), 'N2') + ' MB' AS [Total Size],
    mts.durability_desc AS [Durability],
    CASE WHEN mts.is_memory_optimized = 1 THEN 'Yes' ELSE 'No' END AS [Memory Optimized],
    -- Calculate growth potential for temporal tables
    CASE 
        WHEN mts.TemporalType = 'SYSTEM_VERSIONED_TEMPORAL_TABLE' 
             AND ISNULL(hts.HistoryRows, 0) > 0 AND mts.CurrentSizeMB > 0
        THEN FORMAT((CAST(hts.HistorySizeMB AS FLOAT) / mts.CurrentSizeMB) * 100, 'N1') + '%'
        ELSE 'N/A'
    END AS [History Ratio]
FROM ModernTableSizes mts
LEFT JOIN HistoryTableSizes hts ON mts.history_table_id = hts.HistoryTableId
ORDER BY (mts.CurrentSizeMB + ISNULL(hts.HistorySizeMB, 0)) DESC;

Method 2: JSON Integration & Modern Formatting (SQL Server 2016+)

Table Size Report as JSON

-- Generate JSON report for modern applications
SELECT 
    (
        SELECT 
            OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
            OBJECT_NAME(ps.object_id) AS table_name,
            SUM(ps.row_count) AS row_count,
            ROUND(SUM(ps.reserved_page_count) * 8.0 / 1024, 2) AS reserved_mb,
            ROUND(SUM(ps.used_page_count) * 8.0 / 1024, 2) AS used_mb,
            ROUND(SUM(ps.in_row_data_page_count) * 8.0 / 1024, 2) AS data_mb,
            ROUND(SUM(ps.lob_used_page_count) * 8.0 / 1024, 2) AS lob_mb,
            GETDATE() AS report_timestamp,
            -- Add compression info for 2016+ (heap or clustered index, first partition)
            (SELECT TOP (1)
                p.data_compression_desc 
             FROM sys.partitions p 
             WHERE p.object_id = ps.object_id 
                AND p.index_id IN (0, 1)
                AND p.partition_number = 1
            ) AS compression_type
        FROM sys.dm_db_partition_stats ps
        INNER JOIN sys.tables t ON ps.object_id = t.object_id
        WHERE t.type = 'U' AND ps.index_id < 2
        GROUP BY ps.object_id
        ORDER BY SUM(ps.reserved_page_count) DESC
        FOR JSON PATH, ROOT('table_sizes')
    ) AS TableSizeReport;
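
The JSON produced by `FOR JSON PATH, ROOT('table_sizes')` can be consumed directly by client code. A small Python sketch, assuming the shape shown above (the sample payload here is fabricated for illustration):

```python
import json

# Fabricated sample in the shape FOR JSON PATH, ROOT('table_sizes') emits.
payload = '''{"table_sizes":[
  {"schema_name":"dbo","table_name":"Orders","row_count":50000,"reserved_mb":120.5},
  {"schema_name":"dbo","table_name":"Log","row_count":900000,"reserved_mb":800.25}
]}'''

report = json.loads(payload)["table_sizes"]
total_mb = sum(t["reserved_mb"] for t in report)
largest = max(report, key=lambda t: t["reserved_mb"])

print(round(total_mb, 2))                                   # 920.75
print(f'{largest["schema_name"]}.{largest["table_name"]}')  # dbo.Log
```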

Modern Formatted Output with STRING_AGG (SQL Server 2017+)

-- Using STRING_AGG for summary reporting (SQL Server 2017+)
WITH SizeCategories AS (
    SELECT 
        OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
        OBJECT_NAME(ps.object_id) AS TableName,
        SUM(ps.reserved_page_count) * 8.0 / 1024 AS SizeMB,
        CASE 
            WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 >= 1000 THEN 'Large (>1GB)'
            WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 >= 100 THEN 'Medium (100MB-1GB)'
            WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 >= 10 THEN 'Small (10-100MB)'
            ELSE 'Tiny (<10MB)'
        END AS SizeCategory
    FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.tables t ON ps.object_id = t.object_id
    WHERE t.type = 'U' AND ps.index_id < 2
    GROUP BY ps.object_id
)
SELECT 
    SizeCategory,
    COUNT(*) AS TableCount,
    FORMAT(SUM(SizeMB), 'N2') + ' MB' AS TotalSize,
    STRING_AGG(SchemaName + '.' + TableName, ', ') WITHIN GROUP (ORDER BY SizeMB DESC) AS TopTables
FROM SizeCategories
GROUP BY SizeCategory
ORDER BY 
    CASE SizeCategory
        WHEN 'Large (>1GB)' THEN 1
        WHEN 'Medium (100MB-1GB)' THEN 2
        WHEN 'Small (10-100MB)' THEN 3
        ELSE 4
    END;
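
The bucketing logic in the CASE expression is easy to mirror client-side. A Python sketch of the same thresholds (the function name is illustrative):

```python
def size_category(size_mb: float) -> str:
    """Mirror the T-SQL CASE: >=1000 MB large, >=100 medium, >=10 small."""
    if size_mb >= 1000:
        return 'Large (>1GB)'
    if size_mb >= 100:
        return 'Medium (100MB-1GB)'
    if size_mb >= 10:
        return 'Small (10-100MB)'
    return 'Tiny (<10MB)'

print(size_category(1500))  # Large (>1GB)
print(size_category(5))     # Tiny (<10MB)
```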

Method 3: Memory-Optimized Table Analysis (SQL Server 2014+, Enhanced 2016+)

-- Specialized query for memory-optimized tables
WITH MemoryOptimizedAnalysis AS (
    SELECT 
        OBJECT_SCHEMA_NAME(t.object_id) AS SchemaName,
        t.name AS TableName,
        t.durability_desc,
        SUM(ms.memory_used_by_table_kb) / 1024.0 AS MemoryUsedMB,
        SUM(ms.memory_used_by_indexes_kb) / 1024.0 AS IndexMemoryMB,
        SUM(ps.row_count) AS [RowCount],  -- bracketed: ROWCOUNT is a reserved keyword
        -- Calculate memory per row
        CASE 
            WHEN SUM(ps.row_count) > 0 
            THEN (SUM(ms.memory_used_by_table_kb) * 1024.0) / SUM(ps.row_count)
            ELSE 0
        END AS MemoryPerRowBytes
    FROM sys.tables t
    INNER JOIN sys.dm_db_xtp_table_memory_stats ms ON t.object_id = ms.object_id
    INNER JOIN sys.dm_db_partition_stats ps ON t.object_id = ps.object_id
    WHERE t.is_memory_optimized = 1
        AND ps.index_id < 2
    GROUP BY t.object_id, t.name, t.durability_desc, OBJECT_SCHEMA_NAME(t.object_id)
)
SELECT 
    SchemaName + '.' + TableName AS FullTableName,
    durability_desc AS Durability,
    FORMAT([RowCount], 'N0') AS [Rows],
    FORMAT(MemoryUsedMB, 'N2') + ' MB' AS [Table Memory],
    FORMAT(IndexMemoryMB, 'N2') + ' MB' AS [Index Memory],
    FORMAT(MemoryUsedMB + IndexMemoryMB, 'N2') + ' MB' AS [Total Memory],
    FORMAT(MemoryPerRowBytes, 'N0') + ' bytes' AS [Memory/Row],
    -- Memory efficiency indicator
    CASE 
        WHEN MemoryPerRowBytes > 10000 THEN 'Review for optimization'
        WHEN MemoryPerRowBytes > 1000 THEN 'Normal'
        ELSE 'Efficient'
    END AS MemoryEfficiency
FROM MemoryOptimizedAnalysis
ORDER BY MemoryUsedMB + IndexMemoryMB DESC;

Method 4: Azure SQL Database Optimization

Azure SQL Database Specific Query

-- Optimized for Azure SQL Database with DTU/vCore considerations
SELECT 
    SCHEMA_NAME(t.schema_id) AS SchemaName,
    t.name AS TableName,
    SUM(ps.row_count) AS [RowCount],
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS ReservedMB,
    -- Azure SQL specific calculations
    SUM(ps.reserved_page_count) * 8.0 / 1024 / 1024 AS ReservedGB,
    -- Estimate Azure SQL storage cost impact (approximate)
    FORMAT(SUM(ps.reserved_page_count) * 8.0 / 1024 / 1024 * 0.25, 'C') AS EstimatedMonthlyCost,
    -- Compression recommendations for Azure SQL
    CASE 
        WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 > 100 
            AND NOT EXISTS (
                SELECT 1 FROM sys.partitions p 
                WHERE p.object_id = t.object_id 
                    AND p.data_compression_desc != 'NONE'
            )
        THEN 'Consider PAGE compression'
        WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 > 1000
        THEN 'Consider table partitioning'
        ELSE 'Optimal'
    END AS AzureRecommendation
FROM sys.tables t
INNER JOIN sys.dm_db_partition_stats ps ON t.object_id = ps.object_id
WHERE t.type = 'U' AND ps.index_id < 2
GROUP BY t.object_id, t.schema_id, t.name
ORDER BY SUM(ps.reserved_page_count) DESC;
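
The monthly-cost column above is simple arithmetic: reserved pages converted to GB, multiplied by an assumed per-GB rate (the $0.25 figure is a placeholder, not a published Azure price). A Python sketch of the same calculation:

```python
PAGE_SIZE_KB = 8

def estimated_monthly_cost(reserved_pages: int, rate_per_gb: float = 0.25) -> float:
    """Reserved pages -> GB -> cost; rate_per_gb is an assumed placeholder rate."""
    gb = reserved_pages * PAGE_SIZE_KB / 1024 / 1024
    return round(gb * rate_per_gb, 2)

# 131072 pages * 8 KB = 1 GB
print(estimated_monthly_cost(131072))  # 0.25
```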

Azure SQL Database Resource Consumption

-- Analyze resource consumption patterns (Azure SQL Database)
-- Index usage stats are pre-aggregated per table so the join does not
-- multiply page counts by the number of indexes.
WITH UsageByTable AS (
    SELECT 
        object_id,
        SUM(user_seeks) + SUM(user_scans) + SUM(user_lookups) AS TotalReads,
        SUM(user_updates) AS TotalWrites
    FROM sys.dm_db_index_usage_stats
    WHERE database_id = DB_ID()
    GROUP BY object_id
)
SELECT 
    OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
    OBJECT_NAME(ps.object_id) AS TableName,
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS SizeMB,
    ISNULL(MAX(u.TotalReads), 0) AS TotalReads,
    ISNULL(MAX(u.TotalWrites), 0) AS TotalWrites,
    -- Calculate read/write ratio for optimization decisions
    CASE 
        WHEN ISNULL(MAX(u.TotalWrites), 0) = 0 THEN 'Read-Only'
        WHEN ISNULL(MAX(u.TotalReads), 0) / MAX(u.TotalWrites) > 10 THEN 'Read-Heavy'
        ELSE 'Write-Heavy'
    END AS AccessPattern,
    -- Azure SQL optimization suggestions
    CASE 
        WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 > 500 
            AND ISNULL(MAX(u.TotalReads), 0) < 100
        THEN 'Consider archiving or partitioning'
        WHEN SUM(ps.reserved_page_count) * 8.0 / 1024 > 1000
        THEN 'Monitor for auto-scaling triggers'
        ELSE 'Optimally sized'
    END AS AzureOptimization
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
LEFT JOIN UsageByTable u ON t.object_id = u.object_id
WHERE t.type = 'U' AND ps.index_id < 2
GROUP BY ps.object_id
ORDER BY SUM(ps.reserved_page_count) DESC;
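
The read/write classification reduces to a small decision function. A Python sketch of the same rules (the T-SQL version uses integer division; float division is used here for clarity):

```python
def access_pattern(total_reads: int, total_writes: int) -> str:
    """No writes -> Read-Only; reads/writes > 10 -> Read-Heavy; else Write-Heavy."""
    if total_writes == 0:
        return 'Read-Only'
    if total_reads / total_writes > 10:
        return 'Read-Heavy'
    return 'Write-Heavy'

print(access_pattern(5000, 0))   # Read-Only
print(access_pattern(1000, 50))  # Read-Heavy
print(access_pattern(100, 50))   # Write-Heavy
```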

Method 5: Graph Tables Support (SQL Server 2017+)

-- Specialized analysis for graph tables (SQL Server 2017+)
SELECT 
    SCHEMA_NAME(t.schema_id) AS SchemaName,
    t.name AS TableName,
    CASE 
        WHEN t.is_node = 1 THEN 'Node Table'
        WHEN t.is_edge = 1 THEN 'Edge Table' 
        ELSE 'Regular Table'
    END AS GraphTableType,
    SUM(ps.row_count) AS [RowCount],
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS SizeMB,
    -- Graph-specific metrics
    CASE 
        WHEN t.is_node = 1 THEN 'Vertices: ' + FORMAT(SUM(ps.row_count), 'N0')
        WHEN t.is_edge = 1 THEN 'Edges: ' + FORMAT(SUM(ps.row_count), 'N0')
        ELSE 'N/A'
    END AS GraphMetric,
    -- Performance considerations for graph queries
    CASE 
        WHEN t.is_edge = 1 AND SUM(ps.row_count) > 1000000 
        THEN 'Consider edge table partitioning'
        WHEN t.is_node = 1 AND SUM(ps.row_count) > 100000 
        THEN 'Monitor node query performance'
        ELSE 'Optimal'
    END AS GraphOptimization
FROM sys.tables t
INNER JOIN sys.dm_db_partition_stats ps ON t.object_id = ps.object_id
WHERE ps.index_id < 2
    AND (t.is_node = 1 OR t.is_edge = 1 OR t.type = 'U')
GROUP BY t.object_id, t.schema_id, t.name, t.is_node, t.is_edge
ORDER BY SUM(ps.reserved_page_count) DESC;

Method 6: Intelligent Query Processing Integration (SQL Server 2019+)

-- Leverage Query Store for historical table size analysis
WITH HistoricalSizes AS (
    SELECT 
        qsq.query_id,
        qsp.plan_id,
        qsrs.avg_logical_io_reads,
        qsrs.avg_physical_io_reads,
        qsrs.execution_type_desc,
        LAG(qsrs.avg_logical_io_reads) OVER (
            PARTITION BY qsq.query_id 
            ORDER BY qsrs.last_execution_time
        ) AS prev_logical_io,
        qsrs.last_execution_time
    FROM sys.query_store_query qsq
    INNER JOIN sys.query_store_query_text qst ON qsq.query_text_id = qst.query_text_id
    INNER JOIN sys.query_store_plan qsp ON qsq.query_id = qsp.query_id
    INNER JOIN sys.query_store_runtime_stats qsrs ON qsp.plan_id = qsrs.plan_id
    WHERE qst.query_sql_text LIKE '%SELECT%FROM sys.dm_db_partition_stats%'
        OR qst.query_sql_text LIKE '%sp_spaceused%'
),
CurrentTableSizes AS (
    SELECT 
        OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
        OBJECT_NAME(ps.object_id) AS TableName,
        SUM(ps.reserved_page_count) * 8.0 / 1024 AS CurrentSizeMB
    FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.tables t ON ps.object_id = t.object_id
    WHERE t.type = 'U' AND ps.index_id < 2
    GROUP BY ps.object_id
)
SELECT 
    cts.SchemaName + '.' + cts.TableName AS FullTableName,
    FORMAT(cts.CurrentSizeMB, 'N2') + ' MB' AS [Current Size],
    -- Query performance insights
    CASE 
        WHEN AVG(hs.avg_logical_io_reads) > 10000 
        THEN 'High IO - Consider indexing'
        WHEN AVG(hs.avg_logical_io_reads) < 100 
        THEN 'Low IO - Efficient access'
        ELSE 'Normal IO'
    END AS PerformanceInsight,
    -- Intelligent QP recommendations
    CASE 
        WHEN cts.CurrentSizeMB > 1000 
        THEN 'Enable Adaptive Joins and Memory Grant Feedback'
        WHEN cts.CurrentSizeMB > 100 
        THEN 'Monitor for Batch Mode on Rowstore'
        ELSE 'Standard processing sufficient'
    END AS IQPRecommendation
FROM CurrentTableSizes cts
LEFT JOIN HistoricalSizes hs ON 1=1  -- Cross join for demo only; correlate by table in production
GROUP BY cts.SchemaName, cts.TableName, cts.CurrentSizeMB
ORDER BY cts.CurrentSizeMB DESC;

Method 7: PowerShell 7 & Modern Automation

# Modern PowerShell 7 script with parallel processing
using namespace System.Collections.Generic

param(
    [string]$ServerInstance = "localhost",
    [string]$Database = "master",
    [int]$ParallelBatchSize = 5,
    [switch]$IncludeAzureMetrics,
    [switch]$ExportToJson,
    [string]$OutputPath = "."
)

# Import modern SQL Server module
Import-Module SqlServer -Force

# Define modern T-SQL query
$modernQuery = @"
SELECT 
    OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
    OBJECT_NAME(ps.object_id) AS TableName,
    SUM(ps.row_count) AS [RowCount],
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS ReservedMB,
    SUM(ps.used_page_count) * 8.0 / 1024 AS UsedMB,
    SUM(ps.in_row_data_page_count) * 8.0 / 1024 AS DataMB,
    SUM(ps.lob_used_page_count) * 8.0 / 1024 AS LobMB,
    t.is_memory_optimized,
    t.temporal_type_desc,
    CASE WHEN t.is_node = 1 THEN 'Node' 
         WHEN t.is_edge = 1 THEN 'Edge' 
         ELSE 'Regular' END AS TableType
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
WHERE t.type = 'U' AND ps.index_id < 2
GROUP BY ps.object_id, t.is_memory_optimized, t.temporal_type_desc, t.is_node, t.is_edge
ORDER BY SUM(ps.reserved_page_count) DESC
"@

try {
    Write-Host "Analyzing table sizes using modern methods..." -ForegroundColor Green
    
    $results = Invoke-Sqlcmd -ServerInstance $ServerInstance -Database $Database -Query $modernQuery -TrustServerCertificate
    
    # Process results with modern PowerShell features
    $analysis = $results | ForEach-Object -Parallel {
        [PSCustomObject]@{
            FullName = "$($_.SchemaName).$($_.TableName)"
            RowCount = [long]$_.RowCount
            SizeMB = [math]::Round($_.ReservedMB, 2)
            DataMB = [math]::Round($_.DataMB, 2)
            LobMB = [math]::Round($_.LobMB, 2)
            IsMemoryOptimized = $_.is_memory_optimized
            TemporalType = $_.temporal_type_desc
            TableType = $_.TableType
            SizeCategory = switch ($_.ReservedMB) {
                {$_ -ge 1024} { "Large (>1GB)" }
                {$_ -ge 100} { "Medium (100MB-1GB)" }
                {$_ -ge 10} { "Small (10-100MB)" }
                default { "Tiny (<10MB)" }
            }
            # Approximate storage density: MB per 1,000 rows
            EstimatedGrowth = if ($_.RowCount -gt 0) { 
                [math]::Round(($_.ReservedMB / $_.RowCount) * 1000, 4) 
            } else { 
                0 
            }
        }
    } -ThrottleLimit $ParallelBatchSize
    
    # Display results
    $analysis | Sort-Object SizeMB -Descending | 
        Format-Table FullName, 
                    @{Name="Rows"; Expression={$_.RowCount.ToString("N0")}; Align="Right"},
                    @{Name="Size (MB)"; Expression={$_.SizeMB.ToString("N2")}; Align="Right"},
                    @{Name="Category"; Expression={$_.SizeCategory}},
                    @{Name="Type"; Expression={$_.TableType}},
                    @{Name="Memory Opt"; Expression={if($_.IsMemoryOptimized){"Yes"}else{"No"}}} -AutoSize
    
    # Export to JSON if requested
    if ($ExportToJson) {
        $jsonPath = Join-Path $OutputPath "table_sizes_$(Get-Date -Format 'yyyyMMdd_HHmmss').json"
        $analysis | ConvertTo-Json -Depth 3 | Out-File $jsonPath -Encoding UTF8
        Write-Host "Results exported to: $jsonPath" -ForegroundColor Cyan
    }
    
    # Summary statistics
    $summary = @{
        TotalTables = $analysis.Count
        TotalSizeMB = ($analysis | Measure-Object SizeMB -Sum).Sum
        LargestTable = ($analysis | Sort-Object SizeMB -Descending | Select-Object -First 1).FullName
        MemoryOptimizedTables = ($analysis | Where-Object IsMemoryOptimized -eq $true).Count
        TemporalTables = ($analysis | Where-Object TemporalType -ne 'NON_TEMPORAL_TABLE').Count
    }
    
    Write-Host "`n=== SUMMARY ===" -ForegroundColor Yellow
    $summary.GetEnumerator() | ForEach-Object {
        Write-Host "$($_.Key): $($_.Value)" -ForegroundColor White
    }
}
catch {
    Write-Error "Failed to analyze table sizes: $($_.Exception.Message)"
    exit 1
}

Method 8: Advanced Analytics & Machine Learning Integration (SQL Server 2017+)

-- Heuristic table growth analysis; the fixed multipliers below are illustrative
-- and could be replaced with a real model via ML Services (R/Python)
WITH TableGrowthData AS (
    SELECT 
        OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
        OBJECT_NAME(ps.object_id) AS TableName,
        SUM(ps.row_count) AS CurrentRows,
        SUM(ps.reserved_page_count) * 8.0 / 1024 AS CurrentSizeMB,
        -- Simulate historical data points for ML
        ROW_NUMBER() OVER (ORDER BY SUM(ps.reserved_page_count) DESC) AS TableRank,
        -- Calculate growth velocity based on modification patterns
        CASE 
            WHEN SUM(ps.row_count) > 1000000 THEN 'High Growth'
            WHEN SUM(ps.row_count) > 100000 THEN 'Medium Growth'
            ELSE 'Low Growth'
        END AS GrowthCategory
    FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.tables t ON ps.object_id = t.object_id
    WHERE t.type = 'U' AND ps.index_id < 2
    GROUP BY ps.object_id
),
PredictiveAnalysis AS (
    SELECT 
        *,
        -- Simple linear prediction (can be enhanced with R/Python integration)
        CurrentSizeMB * 1.2 AS Predicted3MonthSizeMB,
        CurrentSizeMB * 1.5 AS Predicted6MonthSizeMB,
        CurrentSizeMB * 2.0 AS Predicted12MonthSizeMB,
        -- Risk assessment
        CASE 
            WHEN CurrentSizeMB > 10000 THEN 'Critical - Monitor closely'
            WHEN CurrentSizeMB > 1000 THEN 'Warning - Plan capacity'
            ELSE 'Normal'
        END AS RiskLevel
    FROM TableGrowthData
)
SELECT 
    SchemaName + '.' + TableName AS FullTableName,
    FORMAT(CurrentRows, 'N0') AS [Current Rows],
    FORMAT(CurrentSizeMB, 'N2') + ' MB' AS [Current Size],
    GrowthCategory,
    FORMAT(Predicted3MonthSizeMB, 'N2') + ' MB' AS [3-Month Projection],
    FORMAT(Predicted6MonthSizeMB, 'N2') + ' MB' AS [6-Month Projection],
    FORMAT(Predicted12MonthSizeMB, 'N2') + ' MB' AS [12-Month Projection],
    RiskLevel,
    -- Capacity planning recommendations
    CASE 
        WHEN Predicted12MonthSizeMB > 50000 THEN 'Consider table partitioning'
        WHEN Predicted6MonthSizeMB > CurrentSizeMB * 3 THEN 'Review growth patterns'
        WHEN GrowthCategory = 'High Growth' THEN 'Monitor weekly'
        ELSE 'Standard monitoring'
    END AS CapacityRecommendation
FROM PredictiveAnalysis
ORDER BY CurrentSizeMB DESC;
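
The projections above are fixed multipliers (1.2x at 3 months, 1.5x at 6, 2.0x at 12), not a trained model. A Python sketch of the same heuristic, with the multipliers kept as explicit assumptions:

```python
# Illustrative growth multipliers, matching the T-SQL query's assumptions.
GROWTH_FACTORS = {3: 1.2, 6: 1.5, 12: 2.0}  # months -> multiplier

def project_size_mb(current_mb: float) -> dict:
    """Project table size at 3/6/12 months using the fixed multipliers."""
    return {months: round(current_mb * f, 2) for months, f in GROWTH_FACTORS.items()}

print(project_size_mb(100.0))  # {3: 120.0, 6: 150.0, 12: 200.0}
```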

Best Practices for Modern SQL Server Versions

1. Performance Optimization

-- Enable Query Store for better monitoring (SQL Server 2016+)
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON
(
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 90),
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    INTERVAL_LENGTH_MINUTES = 60,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = AUTO,
    SIZE_BASED_CLEANUP_MODE = AUTO
);

-- Use Intelligent Query Processing features
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = ON;
ALTER DATABASE SCOPED CONFIGURATION SET INTERLEAVED_EXECUTION_TVF = ON;

2. Modern Monitoring Setup

-- Create automated monitoring with SQL Server Agent (Enhanced for 2016+)
-- Assumes a TableSizeHistory table whose columns match the SELECT below
DECLARE @sql NVARCHAR(MAX) = N'
INSERT INTO TableSizeHistory 
SELECT 
    GETDATE() AS RecordDate,
    DB_NAME() AS DatabaseName,
    OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
    OBJECT_NAME(ps.object_id) AS TableName,
    SUM(ps.row_count) AS [RowCount],
    SUM(ps.reserved_page_count) * 8.0 / 1024 AS ReservedMB,
    SUM(ps.used_page_count) * 8.0 / 1024 AS UsedMB,
    SUM(ps.in_row_data_page_count) * 8.0 / 1024 AS DataMB,
    SUM(ps.lob_used_page_count) * 8.0 / 1024 AS LobMB,
    t.is_memory_optimized,
    t.temporal_type_desc
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
WHERE t.type = ''U'' AND ps.index_id < 2
GROUP BY ps.object_id, t.is_memory_optimized, t.temporal_type_desc;';

-- Execute via SQL Server Agent job (a job needs at least one step;
-- @sql from above becomes the step command)
EXEC msdb.dbo.sp_add_job 
    @job_name = 'Modern Table Size Monitoring',
    @enabled = 1;

EXEC msdb.dbo.sp_add_jobstep
    @job_name = 'Modern Table Size Monitoring',
    @step_name = 'Capture table sizes',
    @subsystem = 'TSQL',
    @command = @sql;

3. Cloud-Ready Monitoring

-- Azure SQL Database compatible monitoring
-- (the Azure-only view is referenced via dynamic SQL so this batch
--  still compiles on on-premises SQL Server)
IF SERVERPROPERTY('EngineEdition') = 5  -- 5 = Azure SQL Database
BEGIN
    EXEC sp_executesql N'
    SELECT 
        ''Azure SQL Database'' AS Environment,
        service_objective AS ServiceTier,
        edition AS Edition,
        CAST(DATABASEPROPERTYEX(DB_NAME(), ''MaxSizeInBytes'') AS BIGINT) 
            / 1024 / 1024 / 1024.0 AS MaxSizeGB
    FROM sys.database_service_objectives;';
END
ELSE
BEGIN
    -- On-premises SQL Server
    SELECT 
        'On-Premises SQL Server' AS Environment,
        SERVERPROPERTY('Edition') AS Edition,
        SERVERPROPERTY('ProductVersion') AS Version;
END;

Conclusion

Modern SQL Server versions offer significantly enhanced capabilities for table size monitoring and analysis. The evolution from basic stored procedures to intelligent, predictive analytics represents a major advancement in database administration capabilities.

Key Takeaways:

  1. Use sys.dm_db_partition_stats for the most efficient queries in SQL Server 2016+
  2. Leverage JSON output for integration with modern applications
  3. Consider memory-optimized and temporal table implications
  4. Implement predictive analytics for proactive capacity planning
  5. Optimize for cloud environments with Azure SQL Database considerations
  6. Automate with modern PowerShell and parallel processing

Quick Modern Reference:

-- Quick query for daily use (SQL Server 2016+)
SELECT 
    CONCAT(OBJECT_SCHEMA_NAME(ps.object_id), '.', OBJECT_NAME(ps.object_id)) AS [Table],
    FORMAT(SUM(ps.row_count), 'N0') AS [Rows],
    CONCAT(FORMAT(SUM(ps.reserved_page_count) * 8.0 / 1024, 'N2'), ' MB') AS [Size],
    MAX(CASE WHEN t.is_memory_optimized = 1 THEN 'Memory-Opt ' ELSE '' END +
        CASE WHEN t.temporal_type = 2 THEN 'Temporal ' ELSE '' END +
        CASE WHEN t.is_node = 1 THEN 'Graph-Node' 
             WHEN t.is_edge = 1 THEN 'Graph-Edge' 
             ELSE 'Regular' END) AS [Features]
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.tables t ON ps.object_id = t.object_id
WHERE t.type = 'U' AND ps.index_id < 2
GROUP BY ps.object_id
ORDER BY SUM(ps.reserved_page_count) DESC;

This modern approach ensures optimal performance, comprehensive analysis, and future-ready database administration practices.
