PGConf.Russia 2025

PGConf.Russia is the largest PostgreSQL conference in Russia and the CIS. The event offers technical sessions, hands-on demos of new DBMS features, master classes, networking opportunities, and knowledge exchange with top PostgreSQL community experts. Each year, hundreds of professionals participate, including DBAs, database architects, developers, QA engineers, and IT managers.

Agenda highlights

  • Latest news and updates from the PostgreSQL global community

  • Monitoring, high availability, and security

  • Streamlined migration from Oracle, Microsoft SQL Server, and other systems

  • Query optimization

  • Scalability, sharding and partitioning

  • AI applications in DBMS

  • PostgreSQL compatibility with other software

  • 63 talks
  • Hybrid format

Talks

Talks archive

PGConf.Russia 2025
  • Андрей Черняков (UIS, CoMagic)

    Making changes to tables under production load is always a complex task. This is the case, for example, when you need to change a column type (say, from int to bigint or from timestamp to timestamptz) or move a table to a different tablespace without losing any changes that occur during the migration.
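
    For context, the straightforward in-place approach is a single ALTER TABLE, but on a large table it takes an ACCESS EXCLUSIVE lock and rewrites all the data, blocking every query against the table until it finishes (the tablespace name below is illustrative):

    -- Naive in-place change: acquires an ACCESS EXCLUSIVE lock and
    -- rewrites the whole table, blocking reads and writes meanwhile.
    ALTER TABLE mytable ALTER COLUMN id TYPE bigint;

    -- Moving to another tablespace behaves the same way: the data is
    -- copied while the table is exclusively locked.
    ALTER TABLE mytable SET TABLESPACE fast_ssd;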

    What if you have hundreds of such tables? With pg-transparent-alter-table, this is no longer a problem. These tasks can be solved with a single simple command:
    $ pg_tat -h 0.0.0.0 -d mydb -c "alter table mytable alter column id bigint"

    Key features include:

    • You can specify any number of alter table commands at once.
    • You can modify partitioned tables, supporting both the old inheritance-based partitioning and new declarative partitioning, including multi-level partitioning.
    • You can interrupt the process at any stage and continue later without losing progress from previous stages.
    • You can change your mind at any time: stop the execution, run "pg_tat --clean", and revert to the original state.
    • Custom commands for changing column order.
    • PostgreSQL version support: 11-17.
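
    Tools in this category typically follow a copy-and-swap pattern: create a table with the desired definition, keep it in sync with triggers while existing rows are copied over in batches, then swap the names in one short transaction. A simplified sketch of that general idea (not pg_tat's actual implementation; object names are illustrative):

    -- 1. Create an empty copy of the table with the new column type.
    CREATE TABLE mytable__new (LIKE mytable INCLUDING ALL);
    ALTER TABLE mytable__new ALTER COLUMN id TYPE bigint;

    -- 2. A trigger on mytable (not shown) mirrors concurrent
    --    INSERT/UPDATE/DELETE into mytable__new.

    -- 3. Backfill existing rows in batches to keep transactions short.
    --    (A real tool also reconciles rows already copied by the trigger.)
    INSERT INTO mytable__new SELECT * FROM mytable WHERE id BETWEEN 1 AND 100000;

    -- 4. Swap the tables in one short transaction.
    BEGIN;
    LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE;
    ALTER TABLE mytable RENAME TO mytable__old;
    ALTER TABLE mytable__new RENAME TO mytable;
    COMMIT;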

    After more than five years of development (the project was previously called transparent-alter-type), pg_tat has become a reliable tool that is actively used in production. I would like to share my experience and discuss its capabilities.

  • Евгений Бузюркин (PostgresPro)
    Дарья Барсукова (НГУ)
    Рустам Хамидуллин (PostgresPro)

    In PostgreSQL performance testing, benchmarks measure query execution time (latency). To get more reliable results, queries are executed repeatedly, generating a dataset of latency values. Performance is often assessed using standard metrics like the median or mean, but we propose a more advanced approach.

    In practice, latency distributions are often multimodal, consisting of multiple underlying distributions with distinct characteristics. In such cases, traditional statistical methods are insufficient, requiring a more detailed analysis of the dataset’s structure.

    Our work presents a tool that automatically performs statistical analysis of benchmark results, accounting for dataset-specific features. It detects multimodality, identifies the number and boundaries of dominant modes, and determines key distribution parameters—providing deeper insights into PostgreSQL performance variations.
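
    As a rough illustration of the problem, a single median can hide two distinct latency populations. One simple check is to bucket the measurements into a histogram and count local maxima; below is a minimal SQL sketch over an assumed bench_results(latency_ms) table (the speakers' tool performs a far more rigorous analysis):

    -- Build a 50-bucket latency histogram and count local maxima;
    -- more than one candidate peak hints at a multimodal distribution.
    WITH bounds AS (
      SELECT min(latency_ms) AS lo, max(latency_ms) AS hi FROM bench_results
    ), hist AS (
      SELECT width_bucket(latency_ms, b.lo, b.hi, 50) AS bucket, count(*) AS n
      FROM bench_results, bounds AS b
      GROUP BY 1
    ), neighbours AS (
      SELECT n,
             lag(n)  OVER (ORDER BY bucket) AS prev_n,
             lead(n) OVER (ORDER BY bucket) AS next_n
      FROM hist
    )
    SELECT count(*) AS candidate_modes
    FROM neighbours
    WHERE n > coalesce(prev_n, 0) AND n > coalesce(next_n, 0);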

  • Александр Попов (PostgresPro)

    This talk will explore different approaches to storing files in PostgreSQL, including:

    • Simple table-based storage

    • Large objects with pg_largeobject

    • pgpro_sfile – a large object (pgpro_bfile) storage solution
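
    The first two approaches can be shown with stock PostgreSQL alone (pgpro_sfile and pgpro_bfile are Postgres Pro extensions and are not sketched here; table and file names below are illustrative):

    -- Simple table-based storage: file content lives in a bytea column.
    CREATE TABLE documents (
        id       bigserial PRIMARY KEY,
        filename text  NOT NULL,
        content  bytea NOT NULL
    );
    INSERT INTO documents (filename, content)
    VALUES ('report.pdf', pg_read_binary_file('report.pdf'));  -- server-side read, needs privileges

    -- Large objects: content is stored in chunks in pg_largeobject
    -- and referenced from an ordinary table by OID.
    CREATE TABLE document_refs (
        id       bigserial PRIMARY KEY,
        filename text NOT NULL,
        blob_oid oid  NOT NULL
    );
    INSERT INTO document_refs (filename, blob_oid)
    VALUES ('report.pdf', lo_import('/tmp/report.pdf'));       -- server-side import
    SELECT lo_get(blob_oid) FROM document_refs WHERE filename = 'report.pdf';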

  • Christopher Travers

    Where I used to work, we had pushed ElasticSearch to its breaking point. We needed an even more scalable replacement for a write-heavy, read-seldom workload. So we built one on PostgreSQL. Now, many of us are building the successor as an open source project. 

    This talk goes over the design of Bagger (named after the giant mining machines), which can manage log volumes running into tens or hundreds of petabytes. More than a review of the architecture, the talk focuses on the whys and the tradeoffs made in the design.

    The talk is intended both to showcase how programmable and powerful PostgreSQL is and to illustrate the fundamental tradeoffs that must be faced when pushing any technology into the big data space.
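
    A common PostgreSQL building block for this kind of write-heavy, append-only workload is time-based declarative partitioning: inserts stay cheap, and whole time ranges can be aged out by detaching partitions instead of running mass DELETEs. A generic sketch, not necessarily Bagger's actual schema:

    -- Append-only log table partitioned by day.
    CREATE TABLE logs (
        ts      timestamptz NOT NULL,
        host    text        NOT NULL,
        message jsonb       NOT NULL
    ) PARTITION BY RANGE (ts);

    CREATE TABLE logs_2025_02_03 PARTITION OF logs
        FOR VALUES FROM ('2025-02-03') TO ('2025-02-04');

    -- Retention: drop a whole day cheaply instead of deleting rows.
    ALTER TABLE logs DETACH PARTITION logs_2025_02_03;
    DROP TABLE logs_2025_02_03;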
