Kirill Borovikov, ООО "Компания "Тензор"
10:00 05 February
45 min

Plan + query = ?.. Finding pleasure in analyzing query plans

Odd things in query plan analysis: wasted time and "unnecessary" buffers.
Structural hints in a plan. How to help a developer with optimization without writing a single line of code. How to match plan nodes with the query text and take advantage of this information.
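
A minimal sketch of the kind of analysis the talk deals with, assuming a local database and psycopg2 (the connection string and query are placeholders, not the speaker's tooling): run EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) and walk the plan tree, attributing self time and buffer reads to each node.

    import json
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")  # assumed connection string
    cur = conn.cursor()
    cur.execute("EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) SELECT * FROM pg_class")
    raw = cur.fetchone()[0]
    # psycopg2 usually parses the json column already; handle both cases
    plan = (raw if isinstance(raw, list) else json.loads(raw))[0]["Plan"]

    def walk(node, depth=0):
        # Self time = node time minus time attributed to its children
        # (per-loop times; loops are ignored here for brevity).
        own = node["Actual Total Time"] - sum(
            c["Actual Total Time"] for c in node.get("Plans", []))
        print("  " * depth, node["Node Type"],
              f"self={own:.2f} ms",
              f"shared read={node.get('Shared Read Blocks', 0)} blocks")
        for child in node.get("Plans", []):
            walk(child, depth + 1)

    walk(plan)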

Slides

Video

Other talks

  • Álvaro Hernández, OnGres
    180 min

    Kubernetes crash course for Postgres DBAs

    Kubernetes is the new way of deploying software, programmatically, on almost any infrastructure (be it cloud or on-prem). But it is a complex beast. How do you get started? How do you dive deeper? What are the specific best practices and special hints for Postgres DBAs dealing with Kubernetes? Join this half-day tutorial to learn, hands-on, among other topics:

    • How to quickly get started with Kubernetes
    • Manage storage
    • Manage services, networking and ingress/egress
    • How to make Postgres cloud-native in Kubernetes
    • Walk through existing Postgres operators, including Zalando, CrunchyData and StackGres.

    This tutorial is very practical. BYOL! (Bring Your Own Laptop.) With Kubernetes installed! (Check microk8s, minikube or k3s if you don't have one installed.)
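
    Not part of the tutorial, but a minimal sketch of the first step, assuming a reachable cluster (e.g. microk8s) and the official Python client (pip install kubernetes); the image and password are placeholder choices:

        from kubernetes import client, config

        config.load_kube_config()          # uses your current kubectl context
        core = client.CoreV1Api()

        pod = {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": "pg-demo", "labels": {"app": "pg-demo"}},
            "spec": {
                "containers": [{
                    "name": "postgres",
                    "image": "postgres:16",   # placeholder version choice
                    "env": [{"name": "POSTGRES_PASSWORD", "value": "demo"}],
                    "ports": [{"containerPort": 5432}],
                }],
            },
        }
        core.create_namespaced_pod(namespace="default", body=pod)
        # For anything real you would use a StatefulSet with a PVC, or one
        # of the operators mentioned above (Zalando, CrunchyData, StackGres).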

  • Семен Трошкин, Мазар АО
    22 min

    A high-availability PostgreSQL cluster under Patroni control for 1C, with a single entry point provided by Consul DNS on Windows

    200 databases, several clusters, several terabytes of data. We share our experience of setting up and running a Patroni cluster: the DBMS cluster on Linux, the 1C server on Windows. We use the PostgreSQL build for 1C, Patroni, Consul, Consul DNS, Commvault, and Ansible. A Vagrantfile and an Ansible playbook with roles are attached.
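
    A minimal sketch of the single-entry-point idea, under my own assumptions rather than the speakers' actual config: Patroni registers the current leader in Consul, and clients resolve it through Consul DNS (the Patroni scope name below is hypothetical):

        import socket

        # Consul's DNS interface normally listens on port 8600; this assumes
        # the OS resolver forwards *.consul queries there.  Patroni tags the
        # leader as "master", so the tag-qualified service name resolves to
        # exactly one node.
        host = "master.pg-cluster.service.consul"  # hypothetical scope name
        addr = socket.getaddrinfo(host, 5432, proto=socket.IPPROTO_TCP)
        print("point 1C / clients at:", addr[0][4])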

  • Sangwook (Shawn) Kim, Apposha
    45 min

    Make Your PostgreSQL 10x Faster on Cloud in Minutes

    Cloud storage has some unique characteristics compared to traditional storage, mainly because it is virtualized and controlled by software. One example is that AWS EBS delivers higher throughput with larger I/O sizes, up to 256 KiB, without hurting latency. Hence, a user can get only about 4 MiB/sec from a 1,000 IOPS EBS volume if the I/O request size is 4 KiB, whereas the same volume delivers about 250 MiB/sec if the I/O request size is 256 KiB. This is because EBS consumes one I/O from the given IOPS budget for every request regardless of its size (up to 256 KiB). Unfortunately, PostgreSQL cannot exploit the full potential of cloud storage, because it was designed without these characteristics in mind.
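
    A quick back-of-the-envelope check of the numbers above (my own arithmetic, not from the talk):

        # EBS charges one I/O per request up to 256 KiB, so sequential
        # throughput is roughly IOPS x request size.
        iops = 1000
        for req_kib in (4, 256):
            mib_per_s = iops * req_kib / 1024
            print(f"{req_kib:>3} KiB requests -> ~{mib_per_s:.0f} MiB/s")
        # 4 KiB -> ~4 MiB/s; 256 KiB -> ~250 MiB/s, matching the abstract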

    In this talk, I will introduce the AppOS extension, which improves the throughput of a write-intensive workload by 10x by transparently making PostgreSQL cloud-storage-native. AppOS works like a storage driver that efficiently exploits the characteristics of cloud storage, such as the dependency of storage throughput and latency on I/O size, atomic write support in cloud block storage, and fast but non-durable local SSDs. To do this, AppOS comprises a Linux-compatible file I/O stack, including a virtual file system, page cache, block I/O layer, and cloud storage driver. On top of the file I/O stack, a syscall module supports registering pre- and post-handlers for file I/O-related system calls, so AppOS works transparently without modifying PostgreSQL code.

    I will focus on presenting key use cases and performance results of the AppOS extension after explaining its internals. Specifically, I will show performance results for OLTP and some batch workloads using standard benchmarking tools such as pgbench and sysbench. I will also present performance results and their implications on multiple clouds, including AWS, GCP, and Azure.
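
    For context, a minimal sketch of the kind of pgbench run such numbers are usually measured with (standard pgbench flags, nothing AppOS-specific; the database name is a placeholder):

        import subprocess

        # Initialize the benchmark tables at scale factor 100 in database
        # "bench" (assumed to exist), then run 16 clients for 5 minutes.
        subprocess.run(["pgbench", "-i", "-s", "100", "bench"], check=True)
        subprocess.run(["pgbench", "-c", "16", "-j", "4", "-T", "300", "bench"],
                       check=True)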

  • Антон Нечеухин, Miro
    90 min

    Tool as code: testing Postgres

    At this master class, we will learn how to run fast load tests of Postgres databases, optimizing database configuration, data structures, indexes, OS settings, and more. To do this, we will write code, build the test infrastructure from it, and run the test. As a result, we get a flexible tool in code to which you can attach any monitoring and which costs very little, because the environment is created in 7 minutes in an empty AWS account and destroyed after the test.
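
    A minimal sketch of the "environment as code" idea, assuming boto3 with placeholder AMI and instance parameters (the master class presumably uses its own tooling):

        import boto3

        ec2 = boto3.resource("ec2")
        inst = ec2.create_instances(
            ImageId="ami-xxxxxxxx",        # placeholder AMI with Postgres baked in
            InstanceType="m5.large",
            MinCount=1, MaxCount=1,
        )[0]
        inst.wait_until_running()
        inst.reload()                      # refresh attributes such as the IP
        try:
            print("test box up at", inst.public_ip_address)
            # ... run pgbench / attach monitoring against the instance here ...
        finally:
            inst.terminate()               # nothing left behind after the test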