Life cycle of Database Migration from Oracle, DB2 to PostgreSQL
Many businesses find that the database management systems they rely on, such as Oracle, DB2 and MS SQL Server, no longer meet their reliability needs, while the costs of maintaining these systems and their product licenses keep increasing, and in some cases the technologies in the current environment have become obsolete or no longer serve the business purpose. As competitors move to newer technologies and tools, these businesses need to migrate to an environment that is efficient, consistent and reliable in order to stay in the market. PostgreSQL has emerged as a leading open-source RDBMS, and since it carries no licensing cost, many companies are planning to migrate databases currently running on other RDBMSs such as Oracle, DB2 and MS SQL Server to PostgreSQL. This report summarizes the methodologies, procedures and techniques involved in successfully migrating data from Oracle to PostgreSQL and from DB2 to PostgreSQL. Migration is not a simple effort; it requires proper planning and testing, from database connectivity through performance analysis. In this talk we will cover most of the steps to consider before and after the migration: choosing the right tools, estimating migration time, data compatibility, code conversion, application connectivity to the database, database configuration parameters, performance analysis, replication setup, database monitoring, patching and backup strategies.
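To make the validation side of such a migration concrete, here is a minimal sketch of a post-migration row-count check between the source Oracle database and the target PostgreSQL database. It assumes the cx_Oracle and psycopg2 drivers are installed; the connection strings and table names are placeholders for illustration only.

```python
# Minimal post-migration sanity check: compare per-table row counts
# between the source Oracle database and the target PostgreSQL database.
# Connection strings and table names below are placeholders.
import cx_Oracle
import psycopg2

TABLES = ["customers", "orders", "order_items"]  # hypothetical table names

ora = cx_Oracle.connect("scott/tiger@ora-host:1521/ORCLPDB1")
pg = psycopg2.connect("host=pg-host dbname=appdb user=app password=secret")

try:
    ora_cur, pg_cur = ora.cursor(), pg.cursor()
    for table in TABLES:
        ora_cur.execute(f"SELECT COUNT(*) FROM {table}")
        pg_cur.execute(f"SELECT COUNT(*) FROM {table}")
        src, dst = ora_cur.fetchone()[0], pg_cur.fetchone()[0]
        status = "OK" if src == dst else "MISMATCH"
        print(f"{table}: oracle={src} postgres={dst} -> {status}")
finally:
    ora.close()
    pg.close()
```

A check like this is only one part of the data-compatibility step; the same pattern can be extended to checksums or sampled value comparisons.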
Slides: Abhinav Sagar and Premnath - migration from DB2 to postgresql.pptx
Video
Other talks
-
Anatoly Anfinogenov, JSC "VNIIZhT", Deputy Director of the Research Center - Head of the Software Development Department
How I stopped worrying and moved 60K lines from 150 PL/SQL procedures to Postgres.
The talk describes the experience of porting a server application, operating on a railway network stretching from Kaliningrad to Khabarovsk, from Oracle 11g Standard Edition to vanilla PostgreSQL 11.5.
At the start of the migration the database contained about 200 stored procedures written in Oracle PL/SQL, totalling about 60,000 lines (developed since 2006, i.e. for more than 12 years), around 250 tables and 50 GB of data.
The talk covers the adventures that accompanied this process, the pleasant and unpleasant discoveries, as well as a prologue, an epilogue and a happy ending. The story is told from the point of view of an Oracle user discovering Postgres.
-
Viktor Eremchenko, Miro, Team Lead
Fault-tolerant PostgreSQL cluster + Patroni
I will describe how we took a comprehensive approach to the problem of PostgreSQL fault tolerance, which options we considered, and how we settled on Patroni.
The talk covers how we tested this solution, how we ensured a fast rollout to production, which problems we ran into and how we solved them.
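As a small operational aside, the state of such a cluster can be inspected through Patroni's REST API. The sketch below polls each node for its current role; it is only a minimal illustration, assuming Patroni's default REST port (8008) and its standard /health and /leader endpoints, with hypothetical hostnames.

```python
# Poll each Patroni node's REST API to find the current leader.
# Assumes Patroni's default REST port (8008) and its standard
# /health and /leader endpoints; hostnames are placeholders.
import requests

NODES = ["pg-node1", "pg-node2", "pg-node3"]  # hypothetical hostnames

for node in NODES:
    base = f"http://{node}:8008"
    try:
        healthy = requests.get(f"{base}/health", timeout=2).status_code == 200
        is_leader = requests.get(f"{base}/leader", timeout=2).status_code == 200
    except requests.RequestException:
        healthy, is_leader = False, False
    role = "leader" if is_leader else ("replica" if healthy else "down")
    print(f"{node}: {role}")
```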
-
Vladimir Aleshin, Avito, Engineer
Stored procedures as code
In this talk we will discuss stored procedures and functions, and code inside the database in general. We will look at stored procedures from different angles: through the eyes of a DBA, a database developer and a backend developer. We will try to understand what is so special about stored procedures that it often prevents established source-code practices from being applied to them. We will also talk about what best practices for working with code look like when applied to stored procedures.
The talk is aimed mostly at database developers and backend developers, and is motivational rather than hardcore technical.
-
Shawn Kim, Apposha, CEO
Make Your PostgreSQL 10x Faster on Cloud in Minutes
Cloud storage has some unique characteristics compared to traditional storage, mainly because it is virtualized and controlled by software. One example is that AWS EBS shows higher throughput with larger I/O sizes, up to 256 KiB, without hurting latency. Hence, a user can get only about 4 MiB/sec from a 1,000 IOPS EBS volume if the I/O request size is 4 KiB, whereas the same user can get about 250 MiB/sec if the I/O request size is 256 KiB. This is because EBS consumes one I/O from the given IOPS budget for every I/O request regardless of the I/O size (up to 256 KiB). Unfortunately, PostgreSQL cannot exploit the full potential of cloud storage because it was designed without considering these unique characteristics of cloud storage.
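To spell out the arithmetic, the snippet below computes the effective throughput of an IOPS-limited volume for a few request sizes; it is a back-of-the-envelope illustration only.

```python
# Back-of-the-envelope throughput for an IOPS-limited volume:
# each request consumes one IOPS regardless of size (up to 256 KiB),
# so effective throughput = IOPS budget * request size.
IOPS_BUDGET = 1000

for io_size_kib in (4, 64, 256):
    throughput_mib_s = IOPS_BUDGET * io_size_kib / 1024
    print(f"{io_size_kib:>3} KiB requests -> ~{throughput_mib_s:.0f} MiB/s")
```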
In this talk, I will introduce the AppOS extension, which improves the throughput of a write-intensive workload by 10x by transparently making PostgreSQL cloud storage-native. AppOS works like a storage driver that efficiently exploits the characteristics of cloud storage, such as the dependency of storage throughput and latency on I/O size, atomic write support in cloud block storage, and fast but non-durable local SSDs. To do this, AppOS comprises a Linux-compatible file I/O stack, including a virtual file system, a page cache, a block I/O layer and a cloud storage driver. On top of this file I/O stack, a syscall module supports registering pre- and post-handlers for file I/O-related system calls, so that everything works transparently without modifying PostgreSQL code.
I will focus on presenting key use cases and performance results of the AppOS extension after explaining the internals. Specifically, I will show the performance results of OLTP and some batch workloads using standard benchmarking tools like pgbench and sysbench. I will also present performance results and implications on multiple clouds including AWS, GCP, and Azure.
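As a toy illustration of the I/O-size effect described above, the sketch below coalesces many small writes into 256 KiB requests before issuing them. It is only a simplified sketch of the general idea, not AppOS code; the chunk size, file path and write counts are arbitrary.

```python
# Toy illustration of write coalescing: buffer small writes and flush
# them as one large request (here 256 KiB), the I/O size at which an
# IOPS-limited volume such as AWS EBS delivers far higher throughput.
# This is NOT AppOS code, just a sketch of the general idea.
import os

class CoalescingWriter:
    def __init__(self, path, chunk_size=256 * 1024):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        self.chunk_size = chunk_size
        self.buf = bytearray()

    def write(self, data: bytes):
        self.buf += data
        while len(self.buf) >= self.chunk_size:
            os.write(self.fd, self.buf[: self.chunk_size])  # one large I/O
            del self.buf[: self.chunk_size]

    def close(self):
        if self.buf:
            os.write(self.fd, bytes(self.buf))  # flush the remainder
        os.close(self.fd)

w = CoalescingWriter("/tmp/coalesced.dat")
for _ in range(1_000):
    w.write(b"x" * 4096)  # many 4 KiB application writes
w.close()
```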