AWS Database Migration Service (DMS) – Production-Ready Guide
By Bishal Dhimal | Dec 2025
📌 Table of Contents
1. Planning Your Database Migration
2. Schema & Code Migration
3. Unsupported Data Types
4. DMS Migration Scenarios
5. Preparation for Production Migration
6. Database Migration Steps
7. Monitoring & Validation
8. Best Practices
9. References
1️⃣ Planning Your Database Migration
- Source & Target Endpoints: Identify the databases, tables, and schemas in scope. DMS creates target tables and primary keys during full load, but secondary indexes, foreign keys, and user accounts must be created manually.
- Network Connectivity: Ensure databases are reachable via VPN, Direct Connect, or public endpoints with proper security groups.
- Replication Subnet Group: DMS replication instances need subnets in at least two AZs for high availability.
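As part of planning, it helps to confirm the network path end to end before provisioning anything. A quick reachability sketch from a host in the planned DMS subnet (hostname, port, and the `dbadmin` user are placeholders):

```bash
# Confirm the source database port is reachable from the planned DMS subnet
nc -vz source-db.example.com 3306

# Confirm credentials work over the same path with the mysql client
mysql -h source-db.example.com -P 3306 -u dbadmin -p -e "SELECT VERSION();"
```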
2️⃣ Schema & Code Migration
DMS migrates the data itself; schema objects and database code must be migrated separately.
Schema Migration:
- Tables & columns
- Primary & foreign keys
- Indexes & constraints
Code Migration:
- Stored procedures, functions, triggers, views
- Packages/modules and custom scripts
Tools:
- AWS Schema Conversion Tool (SCT): Converts schemas and code between different engines.
- Native Tools (for same-engine migration): MySQL Workbench, Oracle SQL Developer, pgAdmin
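For a same-engine MySQL migration, the schema and code objects can be exported up front with mysqldump and replayed on the target before DMS loads the data. A minimal sketch (hostname is a placeholder):

```bash
# Dump schema, routines, triggers, and events only (no rows) for review and replay on the target
mysqldump -h source-db.example.com -u migration_user -p \
  --no-data --routines --triggers --events \
  --databases library > library-schema.sql
```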
3️⃣ Unsupported Data Types & Transformations
Plan for type conversions:
- NUMBER → BIGINT or DECIMAL
- DATE → TIMESTAMP
- Boolean → TINYINT or BOOLEAN
💡 Use SCT to generate conversion scripts and manually adjust unsupported items.
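Type conversions can also be applied at migration time with a DMS table-mapping transformation rule. A sketch assuming a hypothetical `price` column; the rule is combined with the task's selection rules and passed to the task via `--table-mappings`:

```bash
# Hypothetical rule converting a NUMBER-style column to DECIMAL(12,2) on the target
cat > type-transformations.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "10",
      "rule-name": "number-to-decimal",
      "rule-action": "change-data-type",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "library",
        "table-name": "books",
        "column-name": "price"
      },
      "data-type": { "type": "decimal", "precision": 12, "scale": 2 }
    }
  ]
}
EOF
```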
4️⃣ AWS DMS Migration Scenarios
| Source | Target | Notes |
|---|---|---|
| On-Prem MySQL | Amazon RDS MySQL | Full or incremental replication |
| Oracle | Aurora PostgreSQL | Heterogeneous migration; use SCT |
| RDS MySQL | Amazon S3 | Archiving and cost optimization; transition objects to Glacier via S3 lifecycle rules |
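For the archiving row above, the target is a DMS S3 endpoint writing to a bucket, with S3 lifecycle rules handling the move to Glacier storage classes. A sketch (role ARN and bucket name are placeholders):

```bash
# S3 target endpoint for archiving source data as compressed CSV
aws dms create-endpoint \
  --endpoint-identifier mysql-archive-s3 \
  --endpoint-type target \
  --engine-name s3 \
  --s3-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/dms-s3-access,BucketName=my-dms-archive,CompressionType=GZIP
```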
5️⃣ Preparation for Production Migration
- Create replication subnet group (multi-AZ recommended).
- Provision replication instance with sufficient CPU/memory.
- Create source & target endpoints with secure access.
- Create DMS IAM roles (see the CLI sketch after this list):
  - dms-cloudwatch-log-role
  - dms-vpc-role
  - dms-access-for-endpoint (required for Amazon Redshift targets)
- Test connectivity between endpoints before migration.
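A minimal CLI sketch of the preparation steps above (subnet IDs and account details are placeholders); the other two DMS roles follow the same create-and-attach pattern with their matching AWS managed policies:

```bash
# Replication subnet group spanning two Availability Zones
aws dms create-replication-subnet-group \
  --replication-subnet-group-identifier dms-subnet-group \
  --replication-subnet-group-description "DMS subnets across two AZs" \
  --subnet-ids subnet-0aaa1111bbb22222a subnet-0ccc3333ddd44444b

# dms-vpc-role lets DMS manage network interfaces in your VPC
cat > dms-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Principal": { "Service": "dms.amazonaws.com" },
      "Action": "sts:AssumeRole" }
  ]
}
EOF

aws iam create-role \
  --role-name dms-vpc-role \
  --assume-role-policy-document file://dms-trust-policy.json

aws iam attach-role-policy \
  --role-name dms-vpc-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole
```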
6️⃣ Database Migration Steps
6.1 Prepare Source Database (MySQL Example)
sudo apt update
sudo apt install mysql-server -y
sudo mysql_secure_installation
sudo systemctl enable mysql
sudo systemctl start mysql
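If you plan to use CDC, DMS requires binary logging on the source MySQL server in ROW format with full row images and enough binlog retention to resume replication. A sketch that enforces this with a drop-in config (path assumes the Ubuntu packaging used above; on MySQL 8.0 some of these are already the defaults):

```bash
# Drop-in config enabling binary logging for DMS CDC
sudo tee /etc/mysql/mysql.conf.d/dms-cdc.cnf <<'EOF'
[mysqld]
server-id        = 1
log_bin          = /var/log/mysql/mysql-bin.log
binlog_format    = ROW
binlog_row_image = FULL
# keep binlogs at least 24 hours so DMS can resume CDC after interruptions
binlog_expire_logs_seconds = 86400
EOF
sudo systemctl restart mysql
```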
Create a migration user:
CREATE USER 'migration_user'@'%' IDENTIFIED BY 'StrongPassword123';
GRANT ALL PRIVILEGES ON *.* TO 'migration_user'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Create demo database & tables:
CREATE DATABASE library;
USE library;
CREATE TABLE authors (
author_id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100),
country VARCHAR(50)
);
CREATE TABLE books (
book_id INT AUTO_INCREMENT PRIMARY KEY,
title VARCHAR(150),
genre VARCHAR(50),
author_id INT,
FOREIGN KEY (author_id) REFERENCES authors(author_id)
);
INSERT INTO authors (name, country) VALUES
('J.K. Rowling', 'UK'),
('George Orwell', 'UK'),
('Mark Twain', 'USA');
INSERT INTO books (title, genre, author_id) VALUES
('Harry Potter and the Sorcerer''s Stone', 'Fantasy', 1),
('1984', 'Dystopian', 2),
('Animal Farm', 'Satire', 2),
('The Adventures of Tom Sawyer', 'Adventure', 3);
⚠️ For production, use strong passwords and restrict database access with security groups rather than exposing the instance publicly.
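In the same spirit, the broad GRANT ALL above is convenient for a demo, but a production migration user only needs SELECT on the migrated schemas plus the replication privileges DMS uses for CDC. A tighter sketch:

```bash
# Replace the demo GRANT ALL with the minimum DMS needs for full load + CDC
sudo mysql <<'SQL'
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'migration_user'@'%';
GRANT SELECT ON library.* TO 'migration_user'@'%';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'migration_user'@'%';
FLUSH PRIVILEGES;
SQL
```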
6.2 Create Target Database (RDS)
- Engine: MySQL or Aurora
- Instance class: appropriate for workload
- Multi-AZ, encryption, automated backups enabled
- Security group allows DMS replication
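For reference, a minimal sketch of creating such a target with the AWS CLI (identifiers, instance class, password, and security group ID are placeholders to adjust for your workload):

```bash
# Multi-AZ, encrypted target RDS MySQL instance with automated backups
aws rds create-db-instance \
  --db-instance-identifier library-target \
  --engine mysql \
  --db-instance-class db.t3.medium \
  --allocated-storage 50 \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --multi-az \
  --storage-encrypted \
  --backup-retention-period 7 \
  --vpc-security-group-ids sg-0123456789abcdef0
```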
6.3 Create DMS Replication Instance
- Name: dms-replication-instance
- Multi-AZ recommended for production
- Security group allows access from source database
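A CLI sketch matching the settings above (instance class, subnet group, and security group ID are placeholders sized for a small workload):

```bash
# Private, Multi-AZ replication instance in the subnet group created earlier
aws dms create-replication-instance \
  --replication-instance-identifier dms-replication-instance \
  --replication-instance-class dms.t3.medium \
  --allocated-storage 50 \
  --multi-az \
  --replication-subnet-group-identifier dms-subnet-group \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --no-publicly-accessible
```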
6.4 Create Source & Target Endpoints
- Provide host, port, username, password
- Use SSL if possible
- Test endpoint connection before starting migration
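A sketch of creating and testing the source endpoint; the target endpoint is analogous with `--endpoint-type target` and the RDS hostname (hostnames, credentials, and ARNs are placeholders):

```bash
# Source endpoint over SSL
aws dms create-endpoint \
  --endpoint-identifier mysql-source \
  --endpoint-type source \
  --engine-name mysql \
  --server-name source-db.example.com \
  --port 3306 \
  --username migration_user \
  --password 'StrongPassword123' \
  --ssl-mode require

# Verify the replication instance can actually reach the endpoint
aws dms test-connection \
  --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:INSTANCE \
  --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE
```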
6.5 Create Migration Task
- Define tables and schemas to migrate
- Select migration type: Full Load / CDC / Full Load + CDC
- Map schemas and transform data types if needed
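A sketch tying these choices together: a table-mapping file selecting the library schema, then the task itself (endpoint and instance ARNs are placeholders; rule IDs and names are arbitrary):

```bash
# Select every table in the library schema
cat > table-mappings.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-library",
      "object-locator": { "schema-name": "library", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}
EOF

# Full load + CDC task
aws dms create-replication-task \
  --replication-task-identifier library-migration-task \
  --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE \
  --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TARGET \
  --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:INSTANCE \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json
```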
6.6 Monitor Migration Task
- Use DMS console & CloudWatch metrics
- Track throughput, latency, and errors
- Validate data consistency after migration
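Beyond the console, task progress and per-table load results can also be pulled from the CLI (task name and ARN are placeholders):

```bash
# High-level task status; FullLoadProgressPercent reaches 100 once full load finishes
aws dms describe-replication-tasks \
  --filters Name=replication-task-id,Values=library-migration-task \
  --query 'ReplicationTasks[0].{Status:Status,Progress:ReplicationTaskStats.FullLoadProgressPercent}'

# Per-table rows loaded, errors, and state
aws dms describe-table-statistics \
  --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:TASKID \
  --query 'TableStatistics[].{Table:TableName,Rows:FullLoadRows,Errors:FullLoadErrorRows,State:TableState}'
```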
7️⃣ Monitoring & Validation
- CloudWatch metrics: replication instance CPU, memory, storage
- CloudWatch logs: task errors and warnings
- Validate row counts, checksums, and key statistics
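A simple row-count comparison between source and target can be scripted with the mysql client (hostnames and credentials are placeholders); per-table checksums can be compared the same way:

```bash
# Compare row counts table by table between source and target
for TABLE in authors books; do
  SRC=$(mysql -h source-db.example.com -u migration_user -p'StrongPassword123' -N -B \
        -e "SELECT COUNT(*) FROM library.${TABLE};")
  TGT=$(mysql -h target-rds-endpoint.example.com -u admin -p'ChangeMe123!' -N -B \
        -e "SELECT COUNT(*) FROM library.${TABLE};")
  echo "${TABLE}: source=${SRC} target=${TGT}"
done
```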
8️⃣ Best Practices
- Use SCT for heterogeneous migrations
- Enable Multi-AZ & encryption
- Use CDC for minimal downtime
- Secure network access with private subnets or VPN/Direct Connect
- Provision sufficient replication instance resources
- Enable CloudWatch logs & alarms (see the alarm sketch after this list)
- Test migration in Dev/QA before production cutover
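As an example of the alarms item above, an alarm on CDC apply latency at the target (names, dimension values, and the SNS topic are placeholders; confirm the exact dimension values your task emits in the CloudWatch console):

```bash
# Alarm when target apply latency stays above 5 minutes for 15 minutes
aws cloudwatch put-metric-alarm \
  --alarm-name dms-cdc-latency-high \
  --namespace "AWS/DMS" \
  --metric-name CDCLatencyTarget \
  --dimensions Name=ReplicationInstanceIdentifier,Value=dms-replication-instance \
               Name=ReplicationTaskIdentifier,Value=library-migration-task \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 300 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:dms-alerts
```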