
Teodor Sigaev, Postgres Professional
15:20, 25 October
22 min

Why do we need 64-bit transaction IDs?

Back when PostgreSQL was first taking shape, its transaction ID was chosen to be 32 bits wide. At the time, no one could imagine that we might ever need more than 4 billion transactions. However, ongoing technical progress and digitization have pushed some Postgres instances towards their transaction ID limit. The Postgres community responded by making the transaction ID counter wrap around. Still, constantly growing data volumes expose PostgreSQL to new challenges. In my presentation, I will cover these challenges and explain how they can be solved with 64-bit transaction IDs, what the consequences will be, why now is a good time to implement 64-bit IDs, and why this hasn't been done before.
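
As a quick, hands-on illustration of the limit discussed above (this query is an editorial addition, not part of the talk): with 32-bit IDs, every database has to be frozen by vacuum before its oldest unfrozen transaction ID falls roughly 2 billion transactions behind. A standard catalog query shows how close each database currently is to that threshold:

    -- How far each database is from the 32-bit transaction ID wraparound limit.
    -- age() counts transactions since the database's frozen XID; vacuum must
    -- freeze old tuples before this value approaches ~2^31.
    SELECT datname,
           age(datfrozenxid) AS xid_age,
           round(100.0 * age(datfrozenxid) / 2147483647, 2) AS pct_of_limit
    FROM pg_database
    ORDER BY xid_age DESC;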

Slides

Video

The video is available to event participants who have signed in to their personal account.

Other talks

  • Oleg Bartunov, Postgres Professional
    Nikita Glukhov, Postgres Professional
    45 min

    Elegant K-nearest neighbor (KNN) searching in PostgreSQL

    Finding the nearest neighbor can be required for various tasks, for example, when you need to find the closest object to a given point on the map. This task looks trivial to a non-programmer (a person can easily cope with it if they have a map). In a software developer's reality, this task doesn't have a common solution available to everyone. To get rid of this headache, programmers often create ad hoc solutions, also known as "crutches". These workarounds don't look nice and often ruin the mood of a creative programmer, who then needs to go to a beer pub to cope with the cognitive dissonance :)

    Indeed, while a person has a typical field of view and a map at a certain scale, the programmer has only one given point and a huge number of other points (say, billions of stars). This multitude of points also receives a lot of incoming requests, and not only reads but writes too. You can write a perfectly correct SQL query, but the real-world execution plan will be depressingly slow: to find the closest neighbor, you have to read the entire table, compute the distance from the given point to every row, and return the requested number of best matches. A regular index doesn't help here, because you would have to scan the whole search tree and read the entire table in random order, which takes much longer than a plain sequential read.

    In reality, tasks where you need to find nearest neighbors efficiently aren't limited to spatial search. The same approach is useful for classification, finding typos, data clustering, and deduplication. All such tasks benefit from an efficient nearest neighbor search in the DBMS, which is now the de facto standard place to store the data. What do we mean by "efficient search"? It means the search is fast, concurrent, scalable, and supports various (most likely non-standard) data types. We implemented such KNN search in PostgreSQL 11 years ago. I will cover its implementation and current state, and share some KNN use cases (see the query sketch after this talk list).

  • Ivan Muratov, ООО "Первая Мониторинговая Компания"
    45 min

    TimescaleDB 2.0 - Time-series data in TimescaleDB distributed cluster on top of PostgreSQL ORDBMS.

    The TimescaleDB extension turns good old Postgres into a real distributed cluster for storing time-series data while keeping the relational model, convenient SQL, and a time-tested ecosystem. Additional features such as continuous materialized views and data compression make it possible to build truly powerful telematics hubs (a short sketch follows this talk list).

  • Andrey Zubkov, Postgres Professional
    45 min

    New features of pg_profile/pgpro_pwr - historical workload profiler

    This talk is about the Postgres extension pg_profile, a simple historical database workload profiler. I'll talk about its new features and the new statistics available in Postgres 14. There is a branch of pg_profile named pgpro_pwr, designed to run in Postgres Pro Enterprise Edition and Postgres Pro Standard Edition databases; it uses the extended performance statistics of those databases, which provides some valuable benefits. We'll see what's new in the pgpro_pwr extension. I'll also talk about some problems in Postgres performance monitoring and their possible future solutions (a basic usage sketch follows this talk list).

  • Sergey Pronin, Percona
    45 min

    PostgreSQL Disaster Recovery in Kubernetes

    Running PostgreSQL in Kubernetes has become a common theme with the rise of operators. In this talk, we will show you how easy it is to deploy a highly available PostgreSQL cluster in Kubernetes with an Operator. We will also show how to prepare for the failure of a data center by running a hot-standby cluster on a DR site.
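
A minimal query sketch for the KNN talk above ("Elegant K-nearest neighbor (KNN) searching in PostgreSQL"). The table, column, and point values are invented for illustration; the <-> distance operator and distance-ordered GiST scans are stock PostgreSQL features:

    -- Hypothetical table of objects with 2-D coordinates.
    CREATE TABLE stars (
        id    bigserial PRIMARY KEY,
        name  text,
        coord point
    );

    -- A GiST index lets PostgreSQL walk the tree in order of increasing
    -- distance instead of reading the whole table and sorting it.
    CREATE INDEX stars_coord_idx ON stars USING gist (coord);

    -- The 10 nearest neighbors of a given point: the ORDER BY on the
    -- <-> (distance) operator is satisfied directly by the index scan.
    SELECT id, name, coord <-> point '(1.5, 2.5)' AS distance
    FROM stars
    ORDER BY coord <-> point '(1.5, 2.5)'
    LIMIT 10;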
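
A short sketch to go with the TimescaleDB 2.0 talk above. The table and column names are invented; create_hypertable, continuous materialized views, and compression policies are part of the TimescaleDB 2.x API as far as I know, and in a multi-node cluster create_distributed_hypertable would be used instead of create_hypertable:

    -- Hypothetical telemetry table.
    CREATE TABLE conditions (
        time        timestamptz      NOT NULL,
        device_id   int              NOT NULL,
        temperature double precision
    );

    -- Turn the plain table into a hypertable partitioned by time.
    SELECT create_hypertable('conditions', 'time');

    -- Continuous materialized view: hourly averages maintained incrementally.
    CREATE MATERIALIZED VIEW conditions_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           device_id,
           avg(temperature) AS avg_temp
    FROM conditions
    GROUP BY time_bucket('1 hour', time), device_id;

    -- Native compression for chunks older than a week.
    ALTER TABLE conditions SET (timescaledb.compress,
                                timescaledb.compress_segmentby = 'device_id');
    SELECT add_compression_policy('conditions', INTERVAL '7 days');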
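
And a basic usage sketch for the pg_profile talk above, assuming the extension has been installed into a schema named profile; the schema name and the sample IDs are arbitrary:

    -- Take a workload sample; typically scheduled (e.g. via cron) every 30 minutes.
    SELECT profile.take_sample();

    -- Build an HTML report for the interval between two previously taken samples
    -- (sample IDs 480 and 482 are just an example).
    SELECT profile.get_report(480, 482);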