Valery Kosarev
15:30 06 February
22 min

Pluggable storage for large objects

Storing binary data in database tables is sometimes a reasonable choice for a particular project. But as conditions change, or when the original decision was not fully thought through, such storage can become a real nightmare. Even when it is clear how and where the data should live instead, moving to a new solution is often hard: it usually requires changes to application code and system downtime for the migration. This talk presents one solution to these problems: our extension moves binary data out of the database and into external storage, Ceph among others, and does so seamlessly for applications.
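
The extension itself is not shown in the abstract, but the starting point it describes, binary payloads kept directly in a table, looks roughly like this (a minimal sketch; the table and column names are illustrative, not from the talk):

    -- A bytea column keeps the binary payload inside the table itself:
    -- the in-database pattern that can turn into a nightmare as data grows.
    CREATE TABLE attachments (
        id      bigserial PRIMARY KEY,
        name    text  NOT NULL,
        payload bytea NOT NULL
    );

    -- Applications read and write the column with plain SQL, so moving the
    -- payloads out to external storage (Ceph or otherwise) has to preserve
    -- this interface to remain seamless for the application code.
    SELECT payload FROM attachments WHERE id = 42;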

Talk materials

Slides

Video

Other talks

  • Vadim Yatsenko, Tantor Lab
    Sergei Kim, Ingram Micro Cloud
    45 min

    PostgreSQL High Availability cluster for Enterprise

    Lately, PostgreSQL has been seeing wider enterprise adoption. Our company, Ingram Micro Cloud, was one of the first to adopt it: we have been using PostgreSQL for many years as the main DBMS for our products. In this talk, we trace the evolution of our High Availability (HA) PostgreSQL cluster: how we quickly implemented a solution using pgpool-II and wrote failover scripts, tested Postgres-XL, and arrived at unusual configurations of Stolon. We will also cover load balancing, connection pooling, and backups.
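
    As a rough illustration of the pgpool-II stage of that evolution (host names and the script path are assumptions, not taken from the talk), the backend and failover wiring lives in pgpool.conf:

        # Sketch of pgpool.conf: two PostgreSQL backends and a failover hook.
        backend_hostname0 = 'pg-primary.example.com'
        backend_port0     = 5432
        backend_hostname1 = 'pg-standby.example.com'
        backend_port1     = 5432

        # pgpool-II substitutes the placeholders before running the script:
        # %d = failed node id, %h = failed host, %H = new master host.
        failover_command = '/etc/pgpool/failover.sh %d %h %H'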

  • David Fetter, PostgreSQL Global Development Group
    45 min

    Transition tables!

    Transition tables, a new feature in PostgreSQL 10, offer broad new capabilities, including new ways to maintain materialized views. By the end of this talk, you will have seen several ways to use this feature and will have it in your tool chest for the future.
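
    As a minimal sketch of the feature (the table and function names are illustrative, not from the talk), a statement-level AFTER trigger can see all rows inserted by a statement at once through a transition table, for example to keep an aggregate up to date:

        -- orders feeds a running total kept in order_stats.
        CREATE TABLE orders (id bigserial PRIMARY KEY, amount numeric NOT NULL);
        CREATE TABLE order_stats (total numeric NOT NULL DEFAULT 0);
        INSERT INTO order_stats DEFAULT VALUES;

        -- new_rows names the transition table holding every row inserted
        -- by the triggering statement.
        CREATE FUNCTION bump_order_stats() RETURNS trigger AS $$
        BEGIN
            UPDATE order_stats
               SET total = total + (SELECT coalesce(sum(amount), 0) FROM new_rows);
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER orders_stats_trg
            AFTER INSERT ON orders
            REFERENCING NEW TABLE AS new_rows
            FOR EACH STATEMENT
            EXECUTE PROCEDURE bump_order_stats();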

  • Wiktor Brodło, Adjust GmbH
    45 min

    Bagger: How we migrated 1 PB of data from Elasticsearch into PostgreSQL

    In this talk, I will tell you the story of how a bunch of sysadmins got sick of having to resuscitate their petabyte-sized Elasticsearch cluster and decided to replace it with some tried technologies: PostgreSQL, Kafka, a bit of Redis, lots of glue, and the typical sysadmin stubbornness. The result is Bagger: the sysadmin answer to Big Data. A fast, fairly reliable, fault-tolerant store, used mostly for logging timestamped events and retaining them for some amount of time. Bagger is named after the Bagger series of bucket-wheel excavators, feats of German engineering and some of the largest land vehicles ever produced. Just like the excavators dig through tons of material, our Bagger digs through tons of data.
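
    Bagger's actual schema is not given in the abstract, but a store for timestamped events commonly follows this pattern in PostgreSQL 10 (a hypothetical sketch, not Bagger's code): range-partition by time so expired data can be dropped as whole partitions.

        -- Declarative partitioning (new in PostgreSQL 10): one partition per
        -- day; retention is enforced by dropping old partitions, which is far
        -- cheaper than DELETE. Names and dates are illustrative.
        CREATE TABLE events (
            ts      timestamptz NOT NULL,
            source  text        NOT NULL,
            payload jsonb
        ) PARTITION BY RANGE (ts);

        CREATE TABLE events_2018_02_05 PARTITION OF events
            FOR VALUES FROM ('2018-02-05') TO ('2018-02-06');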

  • Mikhail Tyurin, IT entrepreneur
    Konstantin Evteev, X5 FoodTech
    45 min

    Recovery use cases for Logical Replication in PostgreSQL 10

    Avito is the biggest classified site in Russia and the third largest in the world (after Craigslist in the USA and 58.com in China). At Avito, ads are stored in PostgreSQL databases, and logical replication has been in active use for many years. It successfully addresses the growth of data volume and of the number of requests to it, scaling and load distribution, data delivery to the DWH and search subsystems, inter-database and inter-network data synchronization, and so on. But nothing comes for free: the result is a complex distributed system. Hardware failures happen, and you need to be ready for them at all times. There are plenty of examples of logical replication configuration and lots of success stories about using it, but among all this documentation there is nothing about recovering after crashes and data corruption, and no ready-made tools for it. Over years of constantly using PgQ replication, we have gained extensive experience, rethought a lot, and implemented our own add-ons and extensions to restore and synchronize data after crashes in distributed data processing systems. In this talk, we show how this experience can be carried over to the new logical replication subsystem in PostgreSQL 10. In its current implementation, only non-trivial solutions exist; there is a set of issues for the community that come down to making recovery mechanisms as simple as configuring replication is in version 10.
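
    For reference, setting up logical replication in PostgreSQL 10 is itself simple (names and the connection string below are illustrative); the talk's point is that recovering such a setup after a crash is much harder than creating it:

        -- On the publisher:
        CREATE PUBLICATION ads_pub FOR TABLE ads;

        -- On the subscriber; this creates a replication slot on the
        -- publisher and starts the initial table copy.
        CREATE SUBSCRIPTION ads_sub
            CONNECTION 'host=publisher.example.com dbname=avito'
            PUBLICATION ads_pub;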