Oleg Bartunov, Postgres Professional
Teodor Sigaev, Postgres Professional
Alexander Korotkov, Postgres Professional
December
45 min

What to expect in Pg 11?

Slides

Video

Other talks

  • Egor Rogov, Postgres Professional
    90 min

    Tutorial: More indexes, good and various

    "And telling GIN from SP-GIST was quite beyond his wit, we found", said the classic. Can you? This masterclass is about not-so-often used index types (compared to conventional B-tree) which however can do a great job for you. We will look into internal mechanics of these indexes and discuss cases where they can be successfully applied. Also we will talk about some peculiarities of PostgreSQL index access. To spend time efficiently, listeners are required to have basic knowledge of PostgreSQL and should be used to read plans of simple queries.

    Materials of the master class

    A backup copy of the database with the demo data can be downloaded here:

  • Christopher Travers, DeliveryHero SE
    45 min

    PostgreSQL at 20TB and Beyond

    For the last six months I have been working with a massive OLAP environment with 20TB shards, spanning around 400TB of data. Come listen to how we make it all work, the challenges involved, and the skills required. This talk has very little in common with the "10TB and Beyond" talk, because the data environments are very different.

    We will cover analytics performance, data alignment, the reasons for building extensions in C, and moving data between servers in multiple data centers.
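    As a hedged illustration of the data alignment point (an invented example, not taken from the talk): on a typical 64-bit build, ordering columns from wider to narrower types avoids padding bytes in every row:

        -- int4 columns are padded to 8 bytes when followed by int8 columns
        CREATE TABLE t_padded  (a int4, b int8, c int4, d int8);
        -- 8-byte columns first, then 4-byte ones: no padding needed
        CREATE TABLE t_aligned (b int8, d int8, a int4, c int4);

        INSERT INTO t_padded  VALUES (1, 2, 3, 4);
        INSERT INTO t_aligned VALUES (2, 4, 1, 3);
        -- Compare materialized row sizes: the first is larger
        SELECT pg_column_size(t) FROM t_padded  t;
        SELECT pg_column_size(t) FROM t_aligned t;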

  • Mikhail Tyurin, IT entrepreneur
    Konstantin Evteev, X5 FoodTech
    45 min

    Recovery use cases for Logical Replication in PostgreSQL 10

    Avito is the biggest classifieds site in Russia and the third largest in the world (after Craigslist in the USA and 58.com in China). At Avito, ads are stored in PostgreSQL databases, and logical replication has been in active use for many years. It successfully addresses the growth of data volume and of the number of requests for it, scaling and load distribution, delivery of data to the DWH and the search subsystems, inter-database and inter-network data synchronization, and so on. But nothing comes "for free": the result is a complex distributed system. Hardware failures do happen, and you need to be ready for them at all times.

    There are plenty of examples of how to configure logical replication and lots of success stories about using it, but the documentation says nothing about recovering after crashes and data corruption, and there are no ready-made tools for that. Over the years of constantly using PgQ replication we have gained extensive experience, rethought a lot, and implemented our own add-ons and extensions for restoring and synchronizing data after crashes in distributed data processing systems. In this talk we will show how that experience can be carried over to the new logical replication subsystem in PostgreSQL 10. For now these are non-trivial solutions; there remains a set of issues for the community that come down to making recovery mechanisms as simple as configuring replication in version 10.
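    For readers unfamiliar with the feature, here is a minimal sketch of the stock PostgreSQL 10 setup the talk builds on (names are hypothetical; this is not the speakers' recovery tooling):

        -- On the publisher (requires wal_level = logical in postgresql.conf)
        CREATE PUBLICATION ads_pub FOR TABLE ads;

        -- On the subscriber; the table definition must already exist there
        CREATE SUBSCRIPTION ads_sub
            CONNECTION 'host=publisher dbname=avito user=repl'
            PUBLICATION ads_pub;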

  • Anatoly Soldatov, ZAO LANIT
    22 min

    How to become 5 times faster, or the story of our implementation of parallel migration in Liquibase

    Liquibase is a very convenient tool for sequential database migration, used both on our projects and in a large number of other projects and frameworks. It lets you keep the database code together with the application code in a VCS, track attempts at re-running migrations, and much, much more. But sooner or later the project grows, the data takes up terabytes, and Liquibase still rolls migrations out sequentially.

    We could not afford to spend 100 hours deploying migrations, so we came up with a framework around Liquibase that extends its capabilities and makes it possible to execute a whole series of scripts in parallel, or to split one large migration into small partitions and migrate them in parallel.
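    As a hedged sketch of the idea (not the authors' framework; the changeset names and the compute() function are invented), Liquibase's formatted-SQL changelogs make it natural to express one large migration as independent slices that a wrapper can then run on separate connections in parallel:

        --liquibase formatted sql

        --changeset migrator:backfill-slice-0
        UPDATE big_table SET new_col = compute(old_col) WHERE id % 4 = 0;

        --changeset migrator:backfill-slice-1
        UPDATE big_table SET new_col = compute(old_col) WHERE id % 4 = 1;

        -- ...slices 2 and 3 follow the same pattern; because the slices
        -- touch disjoint rows, they can safely run concurrently.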