February 04 – 06 , 2019

PgConf.Russia 2019


PGConf.Russia is a leading international PostgreSQL conference in Russia, annually bringing together more than 500 PostgreSQL professionals from Russia and other countries: core and software developers, DBAs and IT managers. The three-day program includes training workshops presented by leading PostgreSQL experts, more than 40 talks, panel discussions and a lightning talk session.

Themes

  • PostgreSQL at the cutting edge of technology: big data, internet of things, blockchain
  • New features in PostgreSQL and around: PostgreSQL ecosystem development
  • PostgreSQL in business software applications: system architecture, migration issues and operating experience
  • Integration of PostgreSQL with 1C, GIS and other software application systems
  • 63 talks
  • offline format

Talks

Talks archive

PgConf.Russia 2019
  • Andrey Fefelov, Mastery.pro

    Patroni is becoming the state-of-the-art standard framework for building HA clusters with Postgres.

    During the session we will build a simple three-node cluster using this stack.

    We will discuss Patroni's architecture and the most interesting parameters of its configuration. We will check how failover actually works and how you can initialise a cluster.

    After the session you will be able to build such a cluster from scratch in minutes using the provided Ansible playbooks; a small cluster health-check sketch follows below.
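
    As a quick illustration of the health checks mentioned above, here is a minimal sketch (not from the talk) that asks one node for the cluster topology over Patroni's REST API. It assumes a recent Patroni with its /cluster endpoint on the default REST API port (8008); the host addresses are hypothetical and should be replaced with your own.

      import json
      import urllib.request

      # Hypothetical addresses of the three cluster members.
      NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

      def cluster_overview(host, port=8008, timeout=3):
          """Fetch the cluster view as reported by one Patroni node's REST API."""
          url = f"http://{host}:{port}/cluster"
          with urllib.request.urlopen(url, timeout=timeout) as resp:
              return json.load(resp)

      for host in NODES:
          try:
              overview = cluster_overview(host)
          except OSError as exc:
              print(f"{host}: unreachable ({exc})")
              continue
          # Print each member's role (leader/replica) and state as Patroni sees it.
          for member in overview.get("members", []):
              print(f"{member.get('name')}: role={member.get('role')}, state={member.get('state')}")
          break  # one reachable node is enough for an overview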

  • Vadim Podolny, АО "РАСУ"

    This talk will present a new Distributed Control System platform for nuclear power plant operation. Participants will learn about a control system for highly complex automation objects. In hard real-time mode, more than 150 special subsystems operate to control the various technological processes of a nuclear power plant (NPP), such as the reactor control system for a power unit of more than 1000 MW with a turbine weighing more than 2000 tons. More than 100K readings gained from sensors result in up to 500K parameters covering five branches of physical processes: neutron kinetics, hydrodynamics, chemistry and radiochemistry, and physics of strength. Deviations may turn the whole system into a huge DDoS source made of useful diagnostic information, which is always much larger than the network and hardware are capable of handling. This may lead to a failure of normal operation. The talk will reveal approaches to solving this issue.

    You will learn about the hardware and software architecture of such systems, about backup and replication, data redundancy and technological diversity; how to manage high loads, what QoS is, and what happens if the normal operation system fails, as happened, for example, at Fukushima. But hey, there should be a talk about coding! So: no SSDs and HDDs, only in-memory, data structures of tens of millions of elements, and forget about the processor cache, because it does not work here. Imagine that your newest fourth-generation Xeon has lost all its advantages and turned into a "pumpkin"; so roll up your sleeves, examine timings and synchronicity, and try to make the most of your hardware, finding the weakest link among the processor, the operating system and the network.

  • Aleksander Kuzmenkov, PostgresPro

    A major responsibility of a database engine is to convert a declarative SQL query into an efficient execution plan, employing various methods to scan and join the relations, and there is a constant development effort to improve this area. What clever execution plans can PostgreSQL generate, what's new in version 11, and what is in development? To name a few things, joins are optimized by removing unneeded outer and inner joins and by reducing outer and semi joins to inner joins. There is work to enable merge joins on inequality and range overlap, and to improve join selectivity estimates with multi-column statistics. When it comes to scanning a single relation, covering indexes make index-only scans possible more often (see the sketch below). Incremental sort and more precise estimation of sorting costs help generate better paths when sorted output is required, e.g. when using GROUP BY and ORDER BY or performing merge joins. This talk aims to give an overview of such optimizations, both those that already exist and those being developed now.
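
    To make the covering-index point concrete, here is a minimal sketch (not from the talk; the connection string, table and index are hypothetical). The INCLUDE clause added in PostgreSQL 11 stores the extra column in the index, so once the table has data and has been vacuumed, the query below can typically be answered with an index-only scan.

      import psycopg2  # assumes psycopg2 is installed and a local database named 'demo' exists

      conn = psycopg2.connect("dbname=demo")
      conn.autocommit = True
      cur = conn.cursor()

      # Hypothetical table; the INCLUDE clause makes orders_cust_idx a covering index.
      cur.execute("CREATE TABLE IF NOT EXISTS orders (id bigint, customer_id bigint, total numeric)")
      cur.execute("CREATE INDEX IF NOT EXISTS orders_cust_idx ON orders (customer_id) INCLUDE (total)")

      # With enough data and an up-to-date visibility map, the plan should show
      # 'Index Only Scan using orders_cust_idx'.
      cur.execute("EXPLAIN SELECT customer_id, total FROM orders WHERE customer_id = 42")
      for (line,) in cur.fetchall():
          print(line)

      conn.close()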

  • Tatsuro Yamada, NTT Comware

    As is often seen in OLAP and batch-processing workloads, the more complex a query is (many joins, filters and aggregates), the higher the chance of row-count estimation errors, which lead the planner to choose an inefficient execution plan.

    To address that problem, I developed a tool called pg_plan_advsr as a PostgreSQL extension, which corrects the estimation errors by repeatedly feeding back the information collected during query execution to the planner.

    The tool has three features:

    1. Automatic plan tuning by repeatedly feeding execution information back to the planner
    2. Preservation of all plans generated during plan tuning in a history table
    3. Creation and storage of optimizer hints that can reproduce the plans generated during the tuning process

    I verified the effectiveness of pg_plan_advsr by enabling it while running the Join Order Benchmark (JOB) against PostgreSQL 10.4 and observed execution time dropping to 50% of the original. It is therefore useful for users who would like to do plan tuning for OLAP and batch processing; a small sketch of the estimated-versus-actual row comparison it relies on follows the outline below.

    I will talk about the following things in this presentation:

    • Principles behind pg_plan_advsr and its architecture
    • Detailed information about the measurements done with JOB
    • Possible future enhancements
    • Using aqo and pg_plan_advsr together (experimental)
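
    To illustrate that estimated-versus-actual comparison, here is a minimal sketch (not part of pg_plan_advsr itself; the connection string and query are hypothetical). It walks the plan tree returned by EXPLAIN (ANALYZE, FORMAT JSON) and prints the planner's row estimate next to the actual row count for each node, which is the kind of feedback the extension collects.

      import psycopg2  # assumes psycopg2 and a reachable database

      def report_estimation_errors(conn, query):
          """Print estimated vs. actual row counts for every node of the plan tree."""
          cur = conn.cursor()
          cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + query)
          plan = cur.fetchone()[0][0]["Plan"]  # psycopg2 parses the json column into Python objects

          def walk(node, depth=0):
              est, act = node["Plan Rows"], node["Actual Rows"]
              ratio = act / est if est else float("inf")
              print(f"{'  ' * depth}{node['Node Type']}: estimated={est} actual={act} ratio={ratio:.2f}")
              for child in node.get("Plans", []):
                  walk(child, depth + 1)

          walk(plan)

      conn = psycopg2.connect("dbname=demo")  # hypothetical DSN
      report_estimation_errors(conn, "SELECT * FROM orders JOIN customers USING (customer_id)")
      conn.close()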

All talks

Partners

PgConf.Russia 2019

Organizational

Informational

Technical

Partner