Andrey Rudometov

Postgres Professional


Hello, native multimaster? Comparing bidirectional replication and Postgres Pro Multimaster


Since PostgreSQL 9.5, logical replication has included the replication origin infrastructure for tracking replication progress. Now version 16 is coming with a feature that makes bi-directional replication setups possible using this infrastructure. In early reviews, a cluster interconnected with such replication earned the nickname "native multimaster", bearing some resemblance to Postgres Pro's Multimaster. We're going to figure out the degree of this resemblance and explain the internals of both. The talk expects you to have basic knowledge of WAL internals and its role in PostgreSQL.

Mikhail Maslov


Database Administrator

Multi-transactions and possible issues with them


In this presentation, we cover multi-transactions (multixacts): when they appear and how they can be monitored and checked. We also tackle the possible issue of multixact ID wraparound and the problem of missing files in pg_multixact/offsets when restoring from a backup.
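As a hedged illustration of the kind of monitoring the talk covers (not the speaker's actual queries), multixact consumption per database can be watched like this:

```sql
-- Check how close each database is to multixact ID wraparound.
-- mxid_age() reports the age of the oldest multixact ID in the database;
-- compare it against autovacuum_multixact_freeze_max_age (default 400 million).
SELECT datname,
       mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;
```

When `multixact_age` approaches the freeze threshold, autovacuum starts aggressive freezing; watching this value helps catch runaway multixact consumers early.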

Elena Skvortsova

IT Expertise

Head of Technical Expertise

Migration of high-loaded 1C solutions to Linux/Postgres infrastructure


How do you migrate a high-load information system based on 1C:Enterprise from MS Windows/MS SQL Server to Linux/PostgreSQL without considerable pain?

We'll share our migration strategies and what was key to their success. We'll talk about migration projects and pay attention to some nuances.

Maksim Afinogenov

OKBM Afrikantov


Experience in porting a production management system database from Oracle DBMS to Postgres Pro DBMS at a manufacturing enterprise


The practice of transferring structure, logic, and data from Oracle DBMS to Postgres Pro DBMS: the features and main difficulties of the migration, and the advantages of Postgres Pro in terms of porting logic.

Yury Plotnikov


Head of Department

Andrey Grigoriev

Rostelecom Information Technologies


The story of developing a database change management solution, and what we lacked in Liquibase and Flyway


We will tell you how we store database objects in a version control system. Then we compare the traditional approaches to managing database migrations with the one we preferred, discussing the advantages and disadvantages of both. Finally, we will present our change management solution, pgmig.

Anatoly Anfinogenov

VNIIZHT JSC (a subsidiary of RZD OJSC)

Deputy Director of the Research Center, Head of VNIIZHT Software Development Department

Vacuum Therapy: Treatment of Chronic DB Disease


Three years ago we successfully migrated our application from Oracle DBMS to vanilla PostgreSQL. After that we faced the "childhood" diseases of our application on the new DBMS, which were successfully cured, and some "chronic" diseases, which took much longer to deal with. One of the most memorable problems was DB performance degradation, and its source was insufficient vacuuming of our database. Our experience of understanding and solving this problem is offered in the form of practical recommendations for preventing table and index bloat and for setting up PostgreSQL VACUUM/autovacuum parameters.
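A minimal sketch of the kind of tuning such recommendations usually involve (the table name and thresholds are illustrative, not the speaker's actual values):

```sql
-- Make autovacuum trigger earlier on a large, frequently updated table:
-- vacuum after roughly 1% of rows are dead instead of the 20% default.
ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);

-- Inspect bloat symptoms: dead tuples and last (auto)vacuum times.
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

Per-table storage parameters like these override the global settings, so hot tables can be vacuumed aggressively without touching the rest of the cluster.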

Igor Kosenkov

Postgres Professional

Lead Engineer

Corosync-Pacemaker HA cluster: working on mistakes


I will talk about common mistakes made when setting up a Corosync-Pacemaker failover cluster. These mistakes often lead to fatal consequences and, as a result, to rejection of the chosen solution in favor of others. Want a recipe for the "right" cluster?

Leonid Churikov


Lead Analyst

Building a protection system based on the Postgres DBMS


Why would you need a domestic DBMS?

  • Legal requirements: only domestic DBMSes are approved for data protection.
  • Formal constraints: currently, it is impossible to purchase Microsoft products in Russia legally.
  • Pricing: the licensing mechanisms of foreign DBMSes significantly increase the cost of protection.
  • Performance: it depends on the optimization of information protection tools, which are now being developed for domestic DBMSes (as to why, see the first point).
  • Practice: by the end of last year we finished the transition to domestic databases, and everything works smoothly.

What else does an information security officer need?

Anton Nemtsev

Netrika LLC

Head of Automation

A case of moving a DBMS with business logic to Postgres Pro


I will share the experience of a project migrating an information system from Microsoft SQL Server to Postgres Pro. The owner of the system is Rosagroleasing JSC.

The database volume is about 6 TB (more than 300 tables, with up to 4 billion records each). The team adapted 15 ETL packages and 131 procedures with full functionality, along with partial refactoring of the source code. The system's performance has remained the same.

Bruce Momjian


Vice President, Postgres Evangelist

Beyond Joins and Indexes


My presentation "Explaining the Postgres Query Optimizer" covers the details of query optimization, optimizer statistics, joins, and indexes. This talk covers 40 other operations the optimizer can choose to handle complex queries, large data sets, and to enhance performance. These include merge append, gather, memoize, and hash aggregate. It explains their purpose and shows queries that can generate these operations.

This is a new talk; draft slides are at

Vasily Timoshenko

Russian Railways Center for Digital Technologies

Sector Head

Experience of migrating a highly loaded system from DB2 for z/OS to Postgres Pro in a multimaster configuration


The developer shares his experience of migrating a highly loaded system in the field of railway transport from IBM DB2 for z/OS to Postgres Pro in a multimaster configuration. The main focus is on the nuances of working with a cluster operating in multimaster mode: what problems were faced and how they were solved.

Pavel Luzanov

Postgres Professional

Head of Educational Programs

Pavel Konotopov

Postgres Professional

Head of Fault Tolerance for PostgreSQL Solutions

Five shades of sharding


The importance of sharding is now colossal. The size of today's databases exceeds 100 terabytes; scaling up vertically or adding replicas containing a complete physical copy of the database is becoming difficult, especially when computational resources are scarce. Database sharding is a way to scale horizontally by partitioning data between compute nodes that are independent of each other.

In the PostgreSQL world, there are well-known scaling tools - CitusDB and Greenplum - as well as new-generation solutions - CockroachDB, YugabyteDB, SPQR, and Shardman.

In this talk, we will discuss the differences between these implementations and their advantages and disadvantages, examine the current state of sharding in vanilla PostgreSQL, and also touch on another important topic: providing data integrity and consistency guarantees across a distributed cluster.

Andrey Zubkov

Postgres Professional

Senior DBA

The new sights of observability in Postgres and Postgres Pro databases


In this talk I'll show some new observability capabilities in Postgres: Grafana visualization of pg_profile/pgpro_pwr data, manual highlighting of interesting objects in pg_profile/pgpro_pwr reports, further development of extended vacuum statistics in Postgres Pro releases, and a new session and query tracing technique in the pgpro_stats extension.

Alexander Burtsev

Postgres Professional

Head of Product

Darya Lepikhova

Postgres Professional

Software Engineer

Vladimir Lipunov

Sternberg Astronomical Institute of Moscow State University


Maksim Emelin

Postgres Professional

Lead DBA

Using Debezium as a tool for delta migration


A delta migration case using Debezium is considered in detail, as well as the process of change data capture, sinking, applying configurations, and load testing.

Alexander Nikitin

Data Egret


Ivan Frolkov

Postgres Professional

Engineering Consultant

Evgeniy Brednya

Postgres Professional

Head of Support

Corrupt Data Recovery


How to recover corrupt data.

Vasiliy Puchkov

Gazpromneft - CR LLC.

Head of Databases

Odyssey of corporate-scale migration to PostgreSQL

  • Scylla and Charybdis of project management;
  • Sirens of personal goals;
  • Circe for IT professionals;
  • Polyphemus of information security.

The tricky part of taking the journey in less than ten years.

Alfred Stolyarov



How we switched Oracle for PostgreSQL for a client, before it became mainstream


The history of import substitution did not start last year after the well-known events. It dates back to 2014, when state and near-state companies began to think of switching to so-called "recommended software". One of these companies approached us back in 2020 with a project to move from Oracle to PostgreSQL. The project was meant to solve accumulated architectural problems (an imperfect storage system for telemetry data; the DBMS itself ran inside a virtual machine) and to optimize the use of disk space (free up space in the main storage, debug the saving of archived data, ensure correct backups). Since the customer's system had to work uninterrupted 24/7, it was necessary to switch from one DBMS to the other "seamlessly", without downtime, with both running simultaneously to allow a step-by-step migration of subsystems and the ability to verify data correctness. And, of course, the work had to be completed as quickly as possible.

In the report we will discuss how we managed to solve this case.

Anton Doroshkevich


Chief Information Officer

Pavel Konotopov

Postgres Professional

Head of Fault Tolerance for PostgreSQL Solutions

RTT, RTO, RPO and synchronous replication


How does network latency affect performance when using synchronous replication? If a primary node fails, how long will our cluster be unreachable? How much data can be lost? Will there be anomalies when reading from a standby? "We are designing our network architecture - give us a minimum RTT value for the cluster solution!" Many DBAs face these issues when designing and implementing cluster solutions. In this short talk we will show how to quickly estimate RTT, RTO, and RPO values, which open-source tools can be used to test a cluster solution and experimentally verify the design parameters, and which metric values can serve as a guide during design.

Igor Alov



Pgpool-II in "Master-Master" mode, or a network engineer's overview of PostgreSQL load balancing solutions


One of the main needs of high-load database projects is a correctly configured load balancer within the database cluster. Most of the solutions I have worked with, including Pgpool-II, could not fully meet business requirements. In this talk I, as a network engineer with broad experience, will cover tricky configurations of Pgpool-II operating in "Master-Master" mode, and consider cases in which such solutions are indispensable.

Pavel Tolmachev

Postgres Professional

Education Department Specialist

Let's get acquainted with GEQO in 20 minutes

-----------------------------------------------------------QUERY PLAN--------------------------------------------------------------
Hash Join
  Hash Cond: (Subject = GEQO)
  -> Hash Join
        Hash Cond: (**Optimizer task = choose the best query execution plan**)
        -> Seq Scan on **The number of potential plans grows exponentially as the number of tables in a query increases**
        -> Hash
              -> Seq Scan on **PostgreSQL solves this problem by using the genetic optimizer (GEQO)**
  -> Hash
        -> Seq Scan on **Topics of the report:**
              Filter: (**(What is GEQO)** AND **(Pros and cons)** AND **(How it works)**)
(10 rows)

Alexey Borschev

Postgres Professional


Evgeniy Zhukov

MMTR Technologies

System Administrator

Database upgrade with pgcopydb


pgcopydb is a new tool that automates copying a database between two running Postgres servers. In this talk we will review the utility and share the results of testing it and of its practical use for Postgres database upgrades.

Alexey Fadeev


Senior .NET Developer, Postgres Evangelist

Igor Suhorukov

MTS Digital

CJM Tech Lead

How to put the whole world into a regular laptop: PostgreSQL and OpenStreetMap


I'll show how anyone can use PostGIS to analyze the geodata of the entire Earth and get answers to their global questions in minutes and seconds.

When you use a taxi in a small town, calling a car by phone, there is a high probability that your trip is charged by a program based on OSM data, using one of the routing packages. Thanks to this use case, taxi company employees put house numbers and street names on buildings, contributing not only to their business but also to OpenStreetMap.

Data analytics scenarios also include tasks such as deciding where best to place a retail outlet so that buyers come to it. Again, data on walking distance and the population of the surroundings can be extracted from geodata. You can estimate the value of real estate based on many factors related to the location of the object and its surroundings.

Scientists can build predictive models to predict epidemics, urban evolution, plan recreational areas and development of existing territories based on open geodata.

Well, you can answer any geography question that comes to mind: calculate the area of cities and buildings or the length of roads, and extract the names of cities, regions, and islands. You can, for example, become a champion in the game of "Cities" or launch a new electric scooter rental service. Everything is limited only by your imagination.

I have published my project: a high-performance data loader that allows you to perform geoanalytics on OpenStreetMap data in PostGIS. It transforms an OpenStreetMap world/region PBF dump into a schema partitioned by H3 regions. The columnar storage option activates the Citus extension in PostgreSQL to speed up aggregation queries.
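As a hedged sketch of the kind of question such a setup can answer (assuming the default osm2pgsql schema with a planet_osm_line table; names are illustrative, not the speaker's actual schema):

```sql
-- Total length of named highways, in kilometres.
-- Transform from Web Mercator (SRID 3857) to lon/lat and cast to
-- geography so ST_Length returns metres on the spheroid.
SELECT name,
       SUM(ST_Length(ST_Transform(way, 4326)::geography)) / 1000.0 AS length_km
FROM planet_osm_line
WHERE highway IS NOT NULL
  AND name IS NOT NULL
GROUP BY name
ORDER BY length_km DESC
LIMIT 10;
```

The same pattern (filter by OSM tag, aggregate a PostGIS measurement) covers road lengths, building areas, and most of the "any geography question" examples above.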

Andrey Borodin


Software Developer, Head of OSS RDBMS Development

Review of some historical CVEs


I'm not a real security expert, but this will help me give an ELI5 explanation of the funniest historical Postgres CVEs.

Vladimir Serdyuk



A heterogeneous distributed system as a way to safely migrate from MS SQL Server to PostgreSQL and reduce sanctions risks


This report is primarily intended for companies, specifically their IT departments, that run Russian 1C 8.x systems and can work with both MS SQL Server and PostgreSQL. We live in a unique time when a heterogeneous IT system (a system with a distributed architecture where each database instance runs under a different DBMS and/or has a different data structure) is justified both economically and in terms of potential risks. On the one hand, we store data in databases with predictable behavior and open source code, regardless of the political situation. On the other hand, with this approach we take advantage of all the benefits (primarily performance) of a powerful DBMS supported by the largest vendor, albeit one from a state hostile to us.

Now is the time to assess risks with the necessary level of paranoia. After all, data can be corrupted not only at the level of storage logic, but also through "hidden" Delete/Update calls at the database engine level. Therefore, it is now relevant not only to monitor the performance of the IT system, but also to audit data and their integrity in a timely manner. It is necessary to implement data verification procedures, period-closing procedures, and fault-tolerant storage, and, accordingly, to account for various threat models in data recovery procedures.

The report presents options for countering such threats and scenarios for the most seamless transition of large databases to PostgreSQL, since it is precisely for such databases that the problem of moving to a new DBMS is particularly acute.

Christopher Travers

Independent Community Member

Principal Engineer

PostgreSQL vs Redis: Making the Right Choice


With the rise of NoSQL databases, a number of falsehoods have flourished regarding how to choose a database engine. This talk focuses specifically on Redis and PostgreSQL, and why one might choose one or the other.

At small scales, we can often get by thinking of database servers as black boxes, but as we scale, the internals and architecture become more and more important. This talk focuses on behavior of these systems at scale and under load.

In this presentation you will learn:

  • How Redis and PostgreSQL differ architecturally
  • How differences in architecture affect scalability and performance
  • Cases where Redis is the clear winner
  • Cases where PostgreSQL is the clear winner

Additionally, some notes will be offered on where PostgreSQL can improve to compete with the sorts of workloads that generally favor Redis.

Dmitry Umnikov

Head of System Analysis

Migrating Oracle to PostgreSQL in a high-load system with microservice architecture


A report on our experience of implementing PostgreSQL instead of Oracle in a highly loaded system with a microservice architecture that processes several terabytes of data. We'll talk about how we went from piloting Greenplum to several interacting PostgreSQL databases with different load profiles, and about the problems we encountered along the way.

Maksim Milyutin

Huawei RRI

Engineer for Key Projects

Analytical open-source solutions based on PostgreSQL


Historically, PostgreSQL was intended for transactional OLTP workloads. This is confirmed by its row-based storage and the impossibility (or at least difficulty) of building a distributed query execution engine based on MPP principles. However, thanks to the extensibility of the PostgreSQL core (first of all, through pluggable access methods) and its tolerant BSD-like license, various forks and extensions have appeared that allow effective analytical processing of big data.

In this talk I'm going to review the Greenplum fork of PostgreSQL and the Citus and TimescaleDB extensions from a system developer's perspective, comparing their common analytical engine features: column storage, data compression, distributed query execution, and so on. The results of this overview will be helpful to database architects seeking a PostgreSQL-based DBMS for analytical workloads.

Ivan Chuvashov



Christopher Travers

Independent Community Member

Principal Engineer

Crazy things you can do with PostgreSQL Indexes


Among relational databases, PostgreSQL is fairly unique in the indexing capabilities it offers. While most of us are familiar with using indexes to speed up filtering on columns, PostgreSQL indexes can do far more than this.

This talk will focus on cases where difficult problems were solved through the creative use of indexes. Each of these cases is from an episode in my career.

In this presentation, you will learn:

  • The general index access methods PostgreSQL supports out of the box
  • How the planner uses indexes (at a high level only)
  • Functional indexes
  • Creative use of index-only scans

This is a talk for all audiences, primarily beginner to intermediate users, but it should have enough information for more advanced users to gain some insight or inspiration as well.
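Two of the techniques listed above can be sketched roughly like this (table and column names are hypothetical, not from the speaker's cases):

```sql
-- A functional index: the planner can use it for queries
-- filtering on the expression lower(email).
CREATE INDEX users_email_lower_idx ON users (lower(email));
SELECT id FROM users WHERE lower(email) = 'alice@example.com';

-- A covering index (PostgreSQL 11+): INCLUDE stores payload columns
-- in the index so the query below can be answered by an
-- index-only scan, without touching the table heap.
CREATE INDEX orders_cust_idx ON orders (customer_id) INCLUDE (total);
SELECT total FROM orders WHERE customer_id = 42;
```

Checking `EXPLAIN` output for "Index Only Scan" confirms whether the second trick actually kicks in for a given query.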

Alyona Rybakina

Postgres Professional

Software Engineer

Igor Melnikov

Postgres Professional

Engineering Consultant

Oracle-like packages in Postgres Pro Enterprise 15


One of the main problems when migrating from Oracle to PostgreSQL is PL/SQL packages.

The PostgreSQL DBMS lacks such functionality, which makes it necessary to rewrite (and sometimes even redesign!) the Oracle PL/SQL code in PL/pgSQL.

With the new version of its flagship distribution, Postgres Pro Enterprise 15, Postgres Pro has added support for package functionality: the PL/pgSQL syntax has been extended and new mechanisms have been added to the stored procedure runtime environment. The report will focus on package support in Postgres Pro Enterprise 15: grouping types, variables, procedures, and functions into a package; support for the package initialization section; and support for global package variables available throughout the "life" of a session.

This presentation will also discuss the specifics of migrating packages from Oracle to the Postgres Pro Enterprise DBMS.

Vigneshwaran C


Software Lead Developer

Logical replication internals


The following topics will be covered as part of the presentation:

  • Architecture of logical replication
  • Publisher introduction
  • Subscriber introduction
  • Data synchronization introduction
  • Logical decoding
  • Replication slots
  • Output plugins
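The moving parts listed above fit together in a minimal setup like this (host and object names are illustrative):

```sql
-- On the publisher: choose the tables whose changes to publish.
CREATE PUBLICATION my_pub FOR TABLE orders, customers;

-- On the subscriber: this creates a replication slot on the publisher,
-- copies the initial table data, then streams decoded changes
-- (via logical decoding and the pgoutput output plugin).
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher.example.com dbname=shop user=repl'
    PUBLICATION my_pub;
```

The slot, the decoding step, and the output plugin are all created implicitly here, which is exactly why a talk on the internals is useful.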

Michael Shurutov

Freelancer

Database Maintenance Engineer

A set of Ansible roles for Postgres


There is a set of Ansible roles for any Postgres configuration: standalone instance, streaming replica, Patroni+etcd, Stolon+etcd, Patroni+Consul, Stolon+Consul.

Andrey Chibuk

FORS Telecom LLC

Lead Expert

Alexander Liubushkin

FORS Telecom LLC


How to transfer 10TB from Oracle to Postgres in 24 hours?


We offer to your attention our experience in data migration and Ora2PgCopy, a program written in Java for high-speed data transfer from Oracle to Postgres, used after creating the tables and porting the application code. High transfer speed is achieved by using the Postgres COPY command, multithreaded file processing in Java, managing the table's logged/unlogged option, and supporting the LOB and CLOB data types. According to test results, Ora2PgCopy works noticeably faster than analogues such as Ispirer (Convertum), oracle_fdw, ora2pg, and Pentaho Kettle. Ora2PgCopy can function as a module within the LUI4ORA2PG migration automation system or independently of it. The history of the Live Universal Interface (LUI) web application development tool and the LUI4ORA2PG migration tool can be found in previous presentations at PGConf conferences.