PgConf.Russia 2017 Conference Program

Panel discussion



ProBackup: fast, reliable, and incremental

Egor Rogov
Postgres Professional

The modern state of backup tools for PostgreSQL certainly leaves room for improvement. Standard utilities provide only basic functionality, and third-party tools solve some, but not all, problems. To take backups reliably, fast, and incrementally at the page level, one needs not only a backup tool but also some support from the database server. In this tutorial we will talk about our new backup and recovery manager, ProBackup, and will show it in action.

Demonstration script:


Ivan Panchenko
Postgres Professional

Ivan Panchenko graduated from the Physics Department of Moscow State University and received a PhD in Physics and Mathematics in 1997. Since 1996 he has been developing sophisticated high-performance systems for business, and he has been working with PostgreSQL since 1998. Ivan led content project development at Rambler, was director of IT development at Stack Group, and was technical director of the social network "Moy Mir". He has developed internet portals for Rambler Media and the Higher School of Economics, as well as other information systems for business.

This tutorial is about various applied JSON usage patterns and the related PostgreSQL functionality. We will discuss data storage in the JSON format, retrieving, changing, and searching such data, JSON features for simple SQL queries, as well as using JSON in stored procedures in different languages. You’ll get hands-on experience with some of the discussed problems on the provided virtual machines.
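As a small taste of what the tutorial covers, here is a sketch of storing and querying JSON data in PostgreSQL (table and column names are invented for illustration; the tutorial's own examples will differ):

```sql
-- Illustrative table with a binary JSON column
CREATE TABLE events (
    id      serial PRIMARY KEY,
    payload jsonb NOT NULL
);

INSERT INTO events (payload)
VALUES ('{"type": "click", "user": {"id": 42, "name": "alice"}}');

-- Retrieve a nested field as text with the #>> path operator
SELECT payload #>> '{user,name}' AS user_name FROM events;

-- Containment search; a GIN index makes such queries fast
CREATE INDEX events_payload_idx ON events USING gin (payload);
SELECT * FROM events WHERE payload @> '{"type": "click"}';

-- Modify a value in place (PostgreSQL 9.5+)
UPDATE events SET payload = jsonb_set(payload, '{user,name}', '"bob"');
```

The `jsonb` type stores a parsed binary representation, which is what makes indexed containment queries like the one above possible.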

Tools for server code debugging, profiling, and tracing

Pavel Luzanov
Postgres Professional

Debugging, profiling, and tracing of executed commands play an important role in the development of any application. This is also true for developing stored procedures in a DBMS.

PostgreSQL offers various tools (both built-in and external) for these purposes.

In this talk, we will provide an overview of the available tools and their advantages and disadvantages, as well as a detailed demo of their use cases.

This talk is part of a basic course for backend application developers (DEV1), which Postgres Professional plans to announce in the near future.

Researching GIS data with PostGIS and adjacent toolset

Dmitry Lebedev

Master's degree in mathematical economics (Economics department, Novosibirsk State University); formerly a full-stack web developer, data analyst in retail, urban activist, and GIS data researcher.

Nowadays one can conduct decent urban research using only public datasets, yielding interesting and unexpected insights. In the presentation, I'll show examples of these calculations in PostGIS, the de facto industry standard.

But PostGIS alone is not enough. You need tools to import, verify, and visualize the data. It is critically important to visualize the data live, to debug your calculations and shorten iterations. I'll describe all these steps:

  1. Collecting the data: public API, OpenStreetMap; direct user input.
  2. 3rd party APIs for calculations.
  3. Visualization of GIS and other sorts of data: QGIS, Matplotlib, Zeppelin integrated with PostGIS.
  4. Debugging the calculations: live visualization (Arc, QGIS, NextGIS Web)
  5. Scripting and minimizing the chores: Makefile, Gulp

Mamonsu – a Swiss Army knife for managing and monitoring PostgreSQL

Dmitry Vasilyev
Postgres Professional

Database engineer. Dmitry started his career as a developer and has recently worked in operations following the DevOps methodology. He took part in building platforms for operations and broadcasting: the Web Elections project, SPIEF 2013-2015, and the Unified State Exam (EGE) 2015.

This tutorial shows how to properly monitor PostgreSQL. We will discuss the mamonsu utility, see how to configure it, examine its hidden features and learn how to extend them.

PostgreSQL & Java Tutorial

Alvaro Hernandez

Álvaro is a 38-year-old IT entrepreneur based in Madrid, Spain. Founder and CEO at 8Kdata, a database R&D company, he spends most of his time working on the ToroDB project, the first NoSQL-on-SQL database: a MongoDB-compatible database that runs on top of PostgreSQL. He is a passionate software developer and open source advocate. Álvaro is a Java software developer, but also a DBA, trainer, and frequent speaker at international database conferences. He also founded the PostgreSQL Spanish User Group (www.postgrespañ, one of the largest PUGs in the world, with more than 700 members.

Java is one of the most used languages for programming with PostgreSQL databases. Join this tutorial to learn or review techniques for connecting to postgres, best programming practices with JDBC, and to explore jOOQ, mapping software that lets you use the full power of SQL and postgres's advanced query features while avoiding all the boilerplate code.

This tutorial is very practical: most of the time will be dedicated to iterating through code samples. It will cover:

  • Introduction to Java and PostgreSQL
  • Ways of connecting to PostgreSQL from Java (not only JDBC!)
  • Introduction to JDBC. JDBC types. PostgreSQL JDBC
  • Code demo: JDBC with PostgreSQL. From Java 1.4 to Java 8, best practices and code samples
  • Code demo: jOOQ, a great mapper for PostgreSQL
  • Java inside PostgreSQL
  • The future of Java and PostgreSQL

About two-thirds of the tutorial will be dedicated to iterating over code samples and demos. All the code will be available from public open-source repositories and built with Maven, so that any attendee may download and build it easily to play with during the tutorial (although this is not required).


Part 1

Part 2

Part 3

Part 4

Full text search capabilities in PostgreSQL

Arthur Zakirov
Postgres Professional

Arthur is a C programmer and a PostgreSQL contributor. He presently works on improving the full-text search features of PostgreSQL.

Teodor Sigaev
Postgres Professional

Teodor is one of the Russian PostgreSQL Major Contributors. He has taken part in PostgreSQL development since 2000. He is the author of full-text search, GiST (together with Oleg Bartunov), and other core features and extensions.

Full text search in PostgreSQL is probably the most advanced among relational DBMSs. This tutorial will explain how to set up full text search configurations and dictionaries and how to build a full text search system, using the example of a simple popular-science web site, with a demonstration of various ranking functions. I will also cover the new RUM index, which accelerates some kinds of full text queries and implements a new, improved ranking function.
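The building blocks the tutorial starts from can be sketched as follows (table and query terms are invented for illustration):

```sql
-- A document is matched against a query via tsvector/tsquery
SELECT to_tsvector('english', 'PostgreSQL full text search')
       @@ to_tsquery('english', 'search');   -- true

-- Illustrative table of articles
CREATE TABLE articles (id serial PRIMARY KEY, title text, body text);

-- GIN index over the computed search vector
CREATE INDEX articles_fts_idx ON articles
    USING gin (to_tsvector('english', title || ' ' || body));

-- Ranked search: ts_rank orders matches by relevance
SELECT id, title,
       ts_rank(to_tsvector('english', title || ' ' || body),
               to_tsquery('english', 'galaxy & formation')) AS rank
FROM articles
WHERE to_tsvector('english', title || ' ' || body)
      @@ to_tsquery('english', 'galaxy & formation')
ORDER BY rank DESC;
```

Note that with a plain GIN index the `ORDER BY rank` step requires fetching and re-ranking all matches; avoiding that extra step is exactly what the RUM index mentioned above addresses.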

Skala-SR / Postgres Pro — preconfigured high-performance database engines

Igor Vedyohin

Igor graduated from the Economics Department of the Moscow State University of Civil Engineering in 1998 with a degree in engineering economics. He has been working at IBS since 2001. In 2013 he took the position of Deputy Director General and headed the IT infrastructure division, which covers the functionality and development of the company's information environment as well as information security.

The History of Postgres

Peter van Hardenberg
Ink and Switch

Peter van Hardenberg was a founding member of the Heroku Postgres team, which exists to make databases accessible to application developers at any scale. He has built products, systems, and teams over a career that has ranged from collecting research data offshore in the Arctic Ocean and writing physics engines for games on Nintendo's 3DS console to helping make developers more productive at Heroku. Peter was born in Canada but lives in San Francisco with his family.

The PostgreSQL community is over 20 years old, but the history of PostgreSQL dates back even farther. In this talk, we'll learn about the roots of the Postgres project, learn about some of the people who contribute to it, study how it has changed over time, and pay special attention to the many contributions of Russian people.

Connection pooling at any scale

Vladimir Borodin

Vladimir is a system administrator in the data storage systems department of Yandex, which he joined in 2012. He works on everything related to data storage, recently focusing mainly on metadata storage.

It's no secret that PostgreSQL connections are expensive, so you should conserve them. PgPool-II and PgBouncer have existed for quite a long time to solve this problem. At Yandex, tens of thousands of connections to a single database are not a surprise, so we have been using PgBouncer since time immemorial. This talk gives an overview of the problems we faced and the ways we solved them.
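As a minimal illustration of the approach (values are invented, not Yandex's configuration), PgBouncer's transaction pooling mode is what lets many thousands of client connections share a small pool of server connections:

```ini
; Minimal pgbouncer.ini sketch -- all values illustrative
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; a server connection is held only for the duration of a transaction,
; so 10000 clients can share 20 backend connections
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 20
```

The trade-off of transaction pooling is that session-level features (prepared statements, session variables, advisory locks held across transactions) need special care, which is one source of the problems such a talk typically covers.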


Migrating off of MongoDB to PostgreSQL

Alvaro Hernandez

Álvaro is a 38-year-old IT entrepreneur based in Madrid, Spain. Founder and CEO at 8Kdata, a database R&D company, he spends most of his time working on the ToroDB project, the first NoSQL-on-SQL database: a MongoDB-compatible database that runs on top of PostgreSQL. He is a passionate software developer and open source advocate. Álvaro is a Java software developer, but also a DBA, trainer, and frequent speaker at international database conferences. He also founded the PostgreSQL Spanish User Group (www.postgrespañ, one of the largest PUGs in the world, with more than 700 members.

MongoDB is a successful database in the NoSQL space, mostly used for OLTP-type workloads. However, due to the lack of ACID guarantees (transactions in particular) and significant performance issues with OLAP/DW workloads, more and more MongoDB users are considering migrating off of MongoDB to an RDBMS, where PostgreSQL is the usual choice. This represents a significant opportunity for the PostgreSQL ecosystem to "bring NoSQL to SQL". This talk will present the challenges MongoDB users are facing and the state of the art of the tools and open source solutions available for performing ETL and live migrations to PostgreSQL. In particular, ToroDB Stampede will be discussed: an open source solution that replicates live from MongoDB, transforms JSON documents into relational tables, and stores the data in PostgreSQL.


RUM indexes and their applications

Alexander Korotkov
Postgres Pro

Alexander holds the status of PostgreSQL Major Contributor. He graduated from MEPhI with honours and defended a PhD thesis based on his contributions to PostgreSQL development, specifically enhancements to fuzzy string search. One of his greatest achievements in the open-source community is the development of the infrastructure for index access method extensibility. Alexander has also improved GiST and GIN indexes and worked on multicore optimization and statistics and selectivity estimation. He has been a mentor in the GSoC program three times.

I want to present a new custom access method that extends the current GIN capabilities using additional information stored in the posting tree/list. For example, storing positional information allows the new AM to return results in relevance order, which can considerably improve the execution time of full-text queries.
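In terms of usage, the `rum` extension looks much like GIN; a sketch (assuming the extension is installed, with invented table names):

```sql
CREATE EXTENSION rum;

CREATE TABLE docs (id serial PRIMARY KEY, fts tsvector);

-- RUM index storing positional information alongside the posting lists
CREATE INDEX docs_rum_idx ON docs USING rum (fts rum_tsvector_ops);

-- The <=> distance operator lets the index itself return rows in
-- relevance order, so no separate rank-and-sort pass is needed
SELECT id
FROM docs
WHERE fts @@ to_tsquery('english', 'postgres & index')
ORDER BY fts <=> to_tsquery('english', 'postgres & index')
LIMIT 10;
```

With GIN, the same top-10 query would have to fetch every match and sort by `ts_rank`; the index-supported ordering is where the speedup for full-text queries comes from.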

Troubleshooting PostgreSQL Streaming Replication

Alexey Lesovsky
Data Egret

Alexey Lesovsky is a PostgreSQL DBA and Linux system administrator at the PostgreSQL consultancy Data Egret. He has over ten years of experience in Linux administration and KVM virtualization. For the last five years he has worked with RDBMSs, particularly PostgreSQL. He has extensive experience in troubleshooting, monitoring, and Postgres internals, and works with PostgreSQL every day, resolving issues that users face on a daily basis. In his spare time Alexey does C programming and creates Ansible playbooks.

Streaming replication was introduced in 2010 and quickly became one of the most popular features of PostgreSQL. Today it is hard to imagine a PostgreSQL installation without streaming replication. With its stability, high efficiency, and ease of configuration, one might think it is a perfect feature. However, while using it you might sometimes enter murky waters. This can often be resolved with a combination of built-in and third-party troubleshooting tools. In my talk I will give an overview of these tools and explain how, with their help, one can diagnose, understand, and eliminate problems related to streaming replication. I will also go through the most frequent issues that occur when streaming replication is used and propose possible solutions. This talk is primarily aimed at DBAs and system administrators who use PostgreSQL in their day-to-day work.
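The built-in side of that toolbox starts with the statistics views; a sketch using the 9.6-era function and column names (they were renamed in later releases):

```sql
-- On the master: per-standby state and replication lag in bytes
SELECT client_addr, state, sent_location, replay_location,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS lag_bytes
FROM pg_stat_replication;

-- On a standby: how far behind replay is, in wall-clock terms
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;
```

A steadily growing `lag_bytes` on one standby while others keep up is the classic starting point for the kind of diagnosis this talk walks through.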


PostgreSQL introduction for Oracle DBA

Aleš Zelený
Česká spořitelna a.s.

My relationship with databases started in 1996 with Borland InterBase, and later its open-source version Firebird, contributing to the database and application development of the Open Public Access Catalog used by the Municipal Library of Prague. Since 2000 I have worked for the Czech savings bank, starting with Oracle 7.3 and continuing to the present 12c version. In 2012 I attended my first Prague PostgreSQL Developers Day, which started my fascination with PostgreSQL. Since 2013 we have been using PostgreSQL for daily business in the same way as Oracle Database and SQL Server.

This talk is intended for Oracle DBAs who are considering using PostgreSQL as another RDBMS engine in their DB portfolio, and for PostgreSQL DBAs who want to learn about some of the similarities and differences between Oracle and PostgreSQL. It is not intended as a migration guideline, nor as advocacy of one platform over another. The general database structure of both platforms will be compared in terms of memory configuration parameters and logical-to-physical structure; we will also touch on security aspects and the available backup & recovery tools. Oracle container databases are not covered. The talk aims to build a bridge for DBAs and spare Oracle DBAs some frustration in their first steps to discover, learn, and start using PostgreSQL as a reliable, fully functional database.


Built-in Sharding update and future

Masahiko Sawada
NTT OSS Center

Masahiko Sawada is a developer at NTT in Japan, where he also provides technical support. His primary interests are transactions, replication, and patch review.

Database sharding distributes a database over a large number of machines, greatly improving performance. With the advent of Foreign Data Wrappers (FDW), it is possible to consider database sharding in PostgreSQL with an acceptable level of code changes. We have been enhancing the FDW infrastructure, such as foreign table inheritance and push-down, so that PostgreSQL can execute distributed queries efficiently using FDW. In this talk, I'll cover what FDW-based sharding is and what use cases it can cover. Then I'll demonstrate how to build sharding and describe the PostgreSQL community's progress on FDW-based sharding. Finally, I'll describe further FDW enhancements such as asynchronous execution and distributed transaction support.
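The foreign-table-inheritance mechanism the talk builds on can be sketched like this (host names, table, and the partitioning scheme are invented for illustration):

```sql
CREATE EXTENSION postgres_fdw;

-- One server definition per shard node
CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.example.com', dbname 'app');
CREATE USER MAPPING FOR CURRENT USER SERVER shard1 OPTIONS (user 'app');

-- Parent table on the coordinator; each shard is a foreign child
CREATE TABLE measurements (id bigint, created date, value numeric);

CREATE FOREIGN TABLE measurements_2017q1 (
    CHECK (created >= DATE '2017-01-01' AND created < DATE '2017-04-01')
) INHERITS (measurements) SERVER shard1;

-- Constraint exclusion routes this query to shard1 only,
-- and qualifying conditions are pushed down to the remote server
SELECT * FROM measurements WHERE created = DATE '2017-02-01';
```

Each additional shard gets its own server definition and foreign child table with a non-overlapping CHECK constraint; the enhancements mentioned above (push-down, async execution) determine how efficiently such a query actually runs.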

How Tencent uses Postgres-XC for their high volume WeChat payment system


Jasonysli works at Tencent, the biggest Internet company in China. He is the Chief PostgreSQL Architect of Tencent Cloud Service and manages the Tencent PostgreSQL team, which includes several experienced software development engineers. He worked on PostgreSQL at Huawei from 2009 to 2013, developing Huawei's internal version of PG. Since 2014 he has been working at Tencent with Postgres-XC (PGXC), adding new features to a private version of PGXC. His team now runs the biggest PGXC cluster in the world, which contains 29 datanodes and 6 coordinators.

Tencent, based in China, is one of the world's largest companies in the social networking space. This talk discusses how Tencent modified the code of Postgres-XC to meet the requirements of their internal payment system.

A fault-tolerant PostgreSQL cluster with Patroni

Alexander Kukushkin
Zalando SE

Alexander is a main contributor to the Patroni project. At Zalando SE he is responsible for creating and maintaining hundreds of PostgreSQL clusters ranging in size from a few megabytes up to 10 terabytes.

In the modern world, an increasing number of IT companies are moving their resources to the cloud, and Zalando is no exception. The rapid growth our company is experiencing, along with the adoption of microservices, was the main driving force behind the changes introduced into the deployment procedure for new PostgreSQL clusters and the solution of the automatic failover problem. The majority of existing solutions for automatic failover require manual configuration of every cluster instance, which complicates provisioning new clusters and adding new nodes to existing clusters.

Partitioning with pg_pathman

Ildar Musin
Postgres Professional

Ildar graduated from the Ufa State Aviation Technical University with a degree in mathematics and programming. He has been involved in the development of an automation system for a microfinance service and of enterprise document management systems. Since 2015 he has been working at Postgres Professional on partitioning functionality for PostgreSQL. Ildar is a developer and co-author of the pg_pathman extension for the PostgreSQL DBMS.

Dmitry Ivanov
Postgres Professional

Dmitry works at Postgres Professional. He is engaged in the development of the pg_pathman extension for PostgreSQL / Postgres Pro, which provides fast and efficient table partitioning.

Partitioning is a long-awaited feature in PostgreSQL. Although Postgres supports partitioning via inheritance, this approach has some disadvantages, such as the need to manually create partitions and support triggers, significant planning overhead, and no query execution optimizations. In this talk, we’ll tell you about the pg_pathman extension we are developing. pg_pathman supports HASH and RANGE partitioning, performs planning and execution optimizations, supports fast insert by using Custom Node instead of triggers, provides functions for partition management (add, split, merge, etc.), supports FDW, non-blocking data migration, and more. We'll also speak about pg_pathman integration with Postgres Pro Enterprise Edition and Oracle-like syntax support for partitioning. Finally, we'll discuss new partitioning capabilities in PostgreSQL 10, the already implemented features and further development plans.
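In day-to-day use, pg_pathman is driven through SQL functions; a sketch following the extension's documented API (table and column names are invented):

```sql
CREATE EXTENSION pg_pathman;

CREATE TABLE journal (id serial, dt timestamp NOT NULL, msg text);

-- 12 monthly RANGE partitions starting from 2017-01-01;
-- new partitions can be added automatically as data arrives
SELECT create_range_partitions('journal'::regclass, 'dt',
                               '2017-01-01'::timestamp,
                               interval '1 month', 12);

-- Alternatively, HASH partitioning into 4 partitions on a key column:
-- SELECT create_hash_partitions('journal'::regclass, 'id', 4);

-- Planner-level pruning: only the partition for March 2017 is scanned
EXPLAIN SELECT * FROM journal WHERE dt = '2017-03-15';
```

Unlike trigger-based inheritance partitioning, inserts are routed by a custom plan node rather than a per-row trigger, which is where the fast-insert claim above comes from.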


Relocatable tables in PostgresPro

Ivan Frolkov
Postgres Professional

Ivan started his career as a professional programmer in 1993, dealing with PL/I, COBOL, and IT development for voucher privatization. Later he worked mainly on web development and business applications. His professional interests include DBMSs and messaging systems.

With big data trends, even uploading data to a database can be problematic: apart from the upload itself, you need to create indexes, perform VACUUM after the upload for correct index-only scans, and so on. In this talk, you will learn how to avoid most of these problems (if not all of them).
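The pain points listed above correspond to a bulk-load sequence like the following (a generic sketch, not the talk's solution; table names and the file path are invented):

```sql
-- Staging table skips WAL, so COPY into it is cheap
CREATE UNLOGGED TABLE staging (LIKE target INCLUDING DEFAULTS);

-- COPY is the fast path, bypassing per-row INSERT overhead
COPY staging FROM '/tmp/batch.csv' WITH (FORMAT csv);

INSERT INTO target SELECT * FROM staging;

-- Indexes are cheaper to build once, after the load, than row by row
CREATE INDEX target_created_idx ON target (created);

-- VACUUM refreshes the visibility map so index-only scans actually work
VACUUM ANALYZE target;
```

Every step after the COPY is extra work proportional to the table size, which is exactly the overhead the relocatable-tables approach aims to reduce.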


Automation of migration to PostgreSQL from various databases

Dmitry Susha
Ispirer Systems Ltd.

Dmitry graduated from the Mechanics and Mathematics Faculty of the Belarusian State University. He is currently the Head of the Database Migration and Modernization Department at Ispirer Systems, where he has been working for more than 9 years. The company has been successfully implementing database and application migration and modernization projects since 1999. Its product for automatic migration, Ispirer MnMTK, allows transfers between major platforms to be executed quickly and efficiently. Migration to PostgreSQL is one of its priority areas.

The report focuses on automating migration to PostgreSQL from other databases using the Ispirer Migration and Modernization Toolkit. It will cover data and SQL code migration, conversion of client applications, embedded SQL, and database APIs, with examples of completed migration projects from Oracle to PostgreSQL and from Microsoft SQL Server to PostgreSQL.


Building high-performance scale-out Database Solutions

Michael Kagan
Mellanox Technologies

Michael Kagan is a co-founder and CTO of Mellanox Technologies. Previously, from 1983 to 1999 Mr. Kagan held key architecture and design positions at Intel Corporation. While at Intel Corporation Mr. Kagan was responsible for Pentium MMX design and managed the architecture team of the Basic PC product group. Mr. Kagan holds a Bachelor of Science in Electrical Engineering from the Technion — Israel Institute of Technology.

Modern high-performance, scale-out database solutions need to move huge amounts of data between system components, and they need to do it quickly and without delay for the end user. In developing its interconnect technologies, Mellanox focuses on the highest bandwidth, ultra-low latency, and advanced CPU offload capabilities. In this session Michael will talk about the value of fast interconnects for database and Big Data solutions and share Mellanox's experience in building such systems in cooperation with leading global vendors.


Interconnect on steroids for the Skala-SR / Postgres Pro database engine

Andrei Nikolayenko

Andrei is an architect at IBS. He works on the design of production infrastructure and platform systems based on Russian hardware and software. Before that he implemented software applications for power engineering at Oracle and headed a development department at CBOSS, accumulating 20 years of experience in the development and implementation of large IT systems for government, energy, and telecommunications.

Alexander Korotkov
Postgres Pro

Alexander holds the status of PostgreSQL Major Contributor. He graduated from MEPhI with honours and defended a PhD thesis based on his contributions to PostgreSQL development, specifically enhancements to fuzzy string search. One of his greatest achievements in the open-source community is the development of the infrastructure for index access method extensibility. Alexander has also improved GiST and GIN indexes and worked on multicore optimization and statistics and selectivity estimation. He has been a mentor in the GSoC program three times.

Boris Neiman

Using 1C platform with Postgres

Grigory Smolkin
Postgres Professional

Debugging 1С code in PostgreSQL DBMS

Dmitry Yuhtimovsky

Leader of a technical support team and lecturer of a unique course on 1С:Enterprise performance for system administrators. Dmitry works with high-load 1С projects (about 3000-5000 users in one database). He has successfully implemented Corporate Technology Support Center projects jointly with the 1C company. Currently he deals with migrating online tools from MS SQL Server to Postgres Pro.

  1. 1C:Enterprise 8 and PostgreSQL 9 interoperability
     1.1 Changes in new 1C platform versions
     1.2 v81c_data and v81c_index schemas
     1.3 Sending 1C queries to SQL
     1.4 Using 1C technological log events for PostgreSQL diagnostics
  2. Analyzing queries that affect PostgreSQL performance
     2.1 A free tool for automating log parsing
     2.2 Pareto principle in action
     2.3 Installation and configuration of the tool
     2.4 A case study of query optimization
         2.4.1 An issue in a PostgreSQL query
         2.4.2 Finding non-optimal operations in a query
         2.4.3 Resolving inefficiencies
  3. PostgreSQL statistics for performance diagnostics
     3.1 Comparing Postgres with MS SQL Server
     3.2 Troubleshooting locks
     3.3 Operating load diagnostics
  4. Case studies by the team

Postgres as BI platform core, features, practical experience

Andrey Fefelov

Andrey has worked with databases since 2006, beginning with Oracle but later falling in love with Postgres. He was a CTO and is now engaged in the development of a BI platform based on Postgres for the largest pharmaceutical retail group in Ireland.

I will tell you why Postgres is a first-choice product as the foundation of a BI system with a classical OLAP workload. Existing open source BI solutions will be covered briefly.

I will also describe the specifics of our architecture: why we chose a snowflake schema and how we perform our extract, transform, and load procedures. Special Postgres tuning for OLAP and massive bulk-load workloads will be mentioned, as well as our use of Postgres as a column store with cstore_fdw by Citus and the results achieved. The cons and problems of our approach will be described at the end of the talk.
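For orientation, columnar storage via cstore_fdw looks like this (a sketch assuming the extension is installed; the fact table and options shown are invented examples):

```sql
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

-- A fact table stored in columnar format with compression
CREATE FOREIGN TABLE sales_fact (
    sale_date  date,
    product_id int,
    amount     numeric
) SERVER cstore_server
  OPTIONS (compression 'pglz');

-- Columnar storage reads only the referenced columns,
-- which suits wide-table OLAP aggregations
SELECT sale_date, sum(amount)
FROM sales_fact
GROUP BY sale_date;
```

The win over row storage comes from skipping unreferenced columns and from compression; the trade-off is that cstore_fdw tables are append-oriented, which fits a batch-loaded BI warehouse.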


Very large tables in PostgreSQL. Or how to turn 60+ TB into 10+ TB

Vadim Aleksandrovich Yatsenko
Progress Soft

My acquaintance with PostgreSQL began in 2008. Since 2011 I have been developing databases on PostgreSQL. At present I head the storage systems department at the company "Progress Soft". Our specialization is the development of large, highly loaded projects using PostgreSQL as the primary database.

The talk will describe how we implemented storage for large tables (1+ billion rows per day). The project has been in production for 2 years, with a total data volume of 300 TB (25 PostgreSQL servers across 2 data centers). I'll talk about the mistakes made in organizing large table storage in the initial phase of the project and how they were corrected, and about how we organize data rotation and archiving. I'll also discuss what we were missing in PostgreSQL 9.4 among the features that appeared in 9.5 and 9.6, and what new features we are waiting for in upcoming PostgreSQL releases.

Optimized use of PostgreSQL in real world production

Roland Sonnenschein
Hesotech GmbH

Roland has been developing solutions to control and document technical processes for more than 25 years. PostgreSQL is the backbone of all these applications. Currently he is working on "Industry 4.0" and on an optical camera-based system to detect landslides.

It is often required by the customer, or even by law, to document the circumstances of the production of parts or lots. This talk is about the automated generation and storage of the corresponding administrative data and their correlation with measurements. Administrative data are entities like supplier batch, article, serial number, or production date. They often need to be exchanged with ERP systems.

PostgreSQL – a real alternative for high-load 1C-based systems

Anton Doroshkevich

Anton is the Chief Information Officer at the InfoSoft company, based in Novosibirsk. He has been working in infrastructure and the maintenance of 1С servers and databases for 12 years. For the last 4 years he has been engaged in the promotion of 1C in Siberia and the Far East. Since 2015 he has been actively promoting migration of 1C to PostgreSQL and has implemented several major projects.

  1. Why is it time to consider switching to PostgreSQL?
  2. 1С and PostgreSQL interoperability.
  3. Why should we start with Windows?
  4. The first experience of migrating big enough 1С databases to Postgres.
  5. Large-scale projects on high-load systems, in figures.

Database platform architecture and administrating PostgreSQL in Skype

Aleksei Plotnikov

Since 2010 I have been working as a Service Engineer (DBA) in the Skype Database Platform team, maintaining and developing Skype's large PostgreSQL installation with a primary focus on high availability, security, and performance.

Most of the main Skype services use a database platform based on PostgreSQL and other open-source technologies, such as Skytools, plProxy, pgBouncer, etc. This platform consists of several hundreds of servers with thousands of databases, which process hundreds of thousands of transactions per second. At the same time, the platform architecture allows its users (applications and their developers) to work with "logical" databases, without any worries about their real “physical” structure.

Our Skype Database Platform team is responsible for the database platform infrastructure. We develop automation systems for various processes that help us ensure service reliability and facilitate development, testing, and deployment of code. In this presentation, I will outline the database platform architecture, review its main components, and tell you about the methods we use in our every-day work to ensure high availability, scalability, replication, fault tolerance, and more.

PostGIS for moving targets

Darafei Praliaskouski

Darafei is a specialist in OpenStreetMap and open GIS platforms. He is a member of the Minsk Linux Users Group and the OpenStreetMap Foundation. Darafei has worked on rendering, data, and image processing. Currently he is a senior GIS engineer at Juno.

PostGIS enables spatial processing in Postgres. PostGIS is usually used for planar Euclidean calculations, but the real world adds time, ambiguity, and uncertainty. We'll have a look at making a thousand cars move smoothly in real time on an OpenStreetMap base map.
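A toy version of the problem can be sketched in PostGIS (the schema and the `roads` table are invented; real map-matching is considerably more involved):

```sql
-- Stream of timestamped GPS fixes per car
CREATE TABLE car_positions (
    car_id int,
    ts     timestamptz,
    geom   geometry(Point, 3857)
);

-- Latest known position of every car
SELECT DISTINCT ON (car_id) car_id, ts, geom
FROM car_positions
ORDER BY car_id, ts DESC;

-- Snap a noisy GPS fix to the nearest road within 50 m
-- (assumes a 'roads' table with linestring geometry in SRID 3857)
SELECT r.id,
       ST_ClosestPoint(r.geom, p.geom) AS snapped
FROM roads r,
     (SELECT geom FROM car_positions
      WHERE car_id = 1 ORDER BY ts DESC LIMIT 1) p
WHERE ST_DWithin(r.geom, p.geom, 50)
ORDER BY ST_Distance(r.geom, p.geom)
LIMIT 1;
```

The "ambiguity and uncertainty" mentioned above shows up precisely here: near intersections several roads fall within the search radius, and the nearest one is not always the right one.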


Voskhod PostgreSQL on Elbrus

Igor Chizhevskiy
Scientific Research Institute "Voskhod"

Igor graduated from National Research Nuclear University MEPhI, Department of Informatics and Management Processes. Since 2006 he has been working at the Scientific Research Institute "Voskhod" on the State system for the manufacturing, issuance, and control of new-generation passport and visa documents, subsequently reformed into the State system of migration registration and accounting. Igor was involved in most stages of system development and enhancement, from the very first issuance of passports up to the import substitution program. Presently he is focused on system architecture maintenance and development.

Dmitry Pogibenko
FGBU "NII Voskhod"

Dmitry graduated from Kazan National Research Technical University named after A.N. Tupolev, department of information security systems. He has developed information systems for retailers, access control and ticket checking for sports facilities, and electronic payment solutions, and took part in the universal electronic card project. Since 2015 Dmitry has been working at FGBU "NII Voskhod". Presently he is engaged in developing a system for migrating the MIR database from DB2 to PostgreSQL within the framework of the import substitution program.

Stanislav Merzlyakov
FGBU "NII Voskhod"

Stanislav graduated from the Moscow Institute of Physics and Technology (State University). Since 2005 he has been working at FGBU "NII Voskhod" on the State system for the manufacturing, issuance, and control of new-generation passport and visa documents. He was involved in most stages of system development, from the creation of the first model to full-scale operation, including updating the technology stack used in PVDNP to keep the system up to date. Currently he works on identifying application bottlenecks for the successful migration of the system to Elbrus.

Sergey Korolev

Sergey graduated in 2013 from National Research Nuclear University MEPhI, faculty of Computing machinery, complexes, systems and networks. In the same year he joined MCST, where he became involved in designing distributions for "Elbrus" computer complexes.

Ilya Kosmodemiansky
Data Egret

Ilya Kosmodemiansky is an expert at Data Egret and a specialist in the PostgreSQL, DB2, and Oracle database management systems. His professional interests cover transactional data processing and the development of fault-tolerant production storage. Ilya pays much attention to popularizing PostgreSQL technology among the Russian-speaking audience. He regularly gives lectures and workshops on the effective use of PostgreSQL features in high-load web, telecom, and related projects.

Ivan Bogdanov
SRC "Voshod"

Ivan graduated from the Moscow State Institute of Radio Engineering, Electronics and Automation (technical university) with a speciality in computing machines, complexes, systems, and networks. Since 2010 he has been working at SRC "Voshod" on the State system for the manufacturing, issuance, and control of new-generation passport and visa documents.

Practical experience of carrying out import substitution using PostgreSQL in a government information system involving not only free software but also Russian hardware (Elbrus servers and others).


Test environment on demand

Radoslav Glinsky
Skype (Microsoft)

Since 2011 I've been working on databases at Skype. I started as a PostgreSQL developer/tester on projects for Skype contacts, authentication, and integrations. Since 2013 I have been in the internal Database Platform team, which provides "PostgreSQL as a service" for the development teams. My job includes automation and tooling for DB deployments, replication, and asynchronous DB scripting, as well as development of an internal DB warehouse.

Do you test your PostgreSQL releases prior to Production in a dedicated test environment? Are you sure that your test environment (Test for short) is equal to Production and in an appropriate state?

In Skype we were facing multiple challenges associated with database testing:
- Simplifying the complex Production architecture of thousands of PostgreSQL instances, interconnected with RPCs and replication, infrastructure servers, and external DB scripts, into their Test counterparts.
- Constantly growing hardware requirements and insufficient cleanup of data generated in Test.
- Differences between Test and Production appeared and accumulated; recognizing and fixing them required a lot of effort.

Cryptography in RDBMS

Dmitry Belyavskiy
Technical Center of Internet

Dmitry Belyavskiy is a leading specialist of the Department of Application Services of the Technical Center of Internet. He graduated from the Moscow State Institute of Electronics and Mathematics. Dmitry has more than 20 years of experience in different IT companies. For the last 10 years he has been working in the area of hosting and domain name registration at major Russian hosting providers and domain name registrars. Since 2010 he has been working at the Technical Center of Internet, where he deals with both software development for registrars and analytical projects. He took an active part in the New gTLD program and in the testing and launch of the top-level domains .TATAR and .ДЕТИ. For more than 10 years he has been working in information security and succeeded in extending Russian cryptography support to several libraries and products (OpenSSL, OpenVPN, XMLSec). Dmitry is the author of more than 100 publications on various aspects of information security and a regular speaker at many Russian and international conferences and forums on domain name registration, hosting, and infrastructure performance and security.

Real-world data often need cryptographic protection. The presentation describes typical problems that can be solved using cryptographic methods and the right ways to use cryptography with an RDBMS. The newest solutions suggested for use with PostgreSQL are described.


Migrating DocsVision EDMS from MS SQL to Postgres

Yury Zhukovets
Digital Design / Docsvision

Yury Zhukovets is a leading specialist of the Digital Design company in databases and BI, as well as in high-load systems processing large volumes of data. For the last 7 years he has worked mainly on document management systems for government structures and major commercial companies.

This talk is about migrating an electronic document management system from MS SQL to PostgreSQL 9.5 or higher as part of the import substitution initiative. We will touch upon architecture specifics, as well as describe the problems we encountered when migrating T-SQL code to PL/pgSQL, and how we resolved them.



PostgreSQL JSON Roadmap

Oleg Bartunov
Postgres Pro

Oleg Bartunov is the most famous Russian PostgreSQL developer and has been working on it for more than 20 years.

The new ISO/IEC 9075-2:2016 standard specifies the syntax and semantics of JSON data and operations for SQL. This talk overviews the requirements of this standard and focuses on the differences between them and the actual implementation of JSON/JSONB in PostgreSQL. Special attention will be paid to JSON Path (an XPath analogue), SQL/JSON functions, and our plans to make Postgres compliant with this standard.
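As an illustration of the kind of navigation JSON Path expressions provide, here is a toy evaluator for a tiny `$.key[index]` subset in Python. The function name and the supported subset are our own simplifications; the real SQL/JSON path language (wildcards, filters, item methods) is far richer.

```python
import re

def json_path(doc, path):
    """Evaluate a tiny subset of JSON path syntax ($.key.key[index])
    against a parsed JSON document. Illustration only: the real
    jsonpath language is far richer than this."""
    if not path.startswith("$"):
        raise ValueError("path must start at the root item '$'")
    current = doc
    # each step is either a ".key" member access or an "[index]" access
    for key, index in re.findall(r"\.(\w+)|\[(\d+)\]", path[1:]):
        current = current[key] if key else current[int(index)]
    return current

doc = {"book": {"title": "SQL:2016", "authors": ["ISO", "IEC"]}}
```

For example, `json_path(doc, "$.book.authors[1]")` walks to the `book` member, then the `authors` array, then its second element.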


Introduction to Greenplum MPP Database

Andreas Scherbaum

Andreas Scherbaum has been working with PostgreSQL since 1997. He is involved in several PostgreSQL-related community projects, is a member of the Board of Directors of the European PostgreSQL User Group, and also wrote a PostgreSQL book (in German). Since 2011 he has been working for EMC/Greenplum/Pivotal, tackling very big databases.

An overview of the architecture of the Greenplum MPP (Massively Parallel Processing) database: the internals of GPDB, how to configure and set it up, and how to distribute data effectively for MPP.


Adaptive query optimization in PostgreSQL

Oleg Ivanov
Postgres Professional

Oleg Ivanov is an R&D engineer at Postgres Professional and a winner of informatics olympiads of various levels. In 2016 he received a BSc degree from Lomonosov Moscow State University; his thesis was devoted to applying machine learning to improving query optimization in DBMS.

Query optimization is an important problem whose solution greatly influences DBMS performance, especially for complex queries. In this talk we consider the PostgreSQL query optimizer and, specifically, the cardinality estimation problem for correlated clauses, which is one of the best-known drawbacks of query optimizers in general. We propose a solution to this problem that involves machine learning methods and is available for PostgreSQL 9.6 as an extension with a patch. We also discuss the experimental evaluation, advantages, disadvantages, and fields of application of the proposed approach.
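To give a flavor of feedback-driven cardinality estimation, here is a toy Python sketch, which is our own simplification and not the extension's actual model: the classic independence assumption multiplies per-clause selectivities, and a learned multiplicative correction is nudged toward observed row counts.

```python
import math

def naive_estimate(total_rows, selectivities):
    # Classic planner assumption: clauses are independent,
    # so their selectivities simply multiply.
    est = float(total_rows)
    for s in selectivities:
        est *= s
    return est

class LearnedCorrection:
    """Running multiplicative correction for one clause combination,
    learned from executed-query feedback. A toy stand-in for the
    machine-learning model described in the talk."""
    def __init__(self, rate=0.5):
        self.log_corr = 0.0   # correction factor, kept in log space
        self.rate = rate      # learning rate
    def corrected(self, est):
        return est * math.exp(self.log_corr)
    def observe(self, est, actual):
        # move the correction toward the observed estimation error
        err = math.log(max(actual, 1) / max(est, 1e-9))
        self.log_corr += self.rate * err
```

With two correlated clauses of selectivity 0.1 each, the naive estimate is 1% of the table, while the true result might be 10%; after a few feedback rounds the corrected estimate converges to the observed cardinality.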


Two Years of Professional Postgres

Ivan Panchenko
Postgres Professional

Ivan Panchenko graduated from the Physics Department of Moscow State University. He received a PhD degree in Physics and Mathematics in 1997. Since 1996 he has been working on the development of sophisticated high-performance systems for business, and he has been practicing PostgreSQL since 1998. Ivan led content project development at Rambler, was director of IT Development at Stack Group, and technical director of the social network "Moy Mir". He is a developer of Rambler Media, Higher School of Economics internet portals, and other information systems for business.

A summary of what the Postgres Professional company has achieved in its two-year history:

  • Our achievements in PostgreSQL development.
  • What is the Russian Postgres Pro DBMS, and how is it related to PostgreSQL?
  • What is Postgres Pro Enterprise, and why Enterprise?
  • What about trainings and certification?


Building PostgreSQL from sources for system administrators

Dmitry Cremer
Federal State Unitary Enterprise Rossiya Segodnya

Until recently an Oracle DBA. An active participant in the migration of the RIA Novosti and Sputnik sites from Oracle to PostgreSQL. My main expertise is building highly available distributed database services.

  • Why build PostgreSQL from sources?
  • Choosing build options
  • Dependencies
  • Setting up the system environment
  • How to customize Linux to work with PostgreSQL
  • Extra software for the PostgreSQL DBA


PostgreSQL in tasks of transport analysis during the design of master plan for 2018 FIFA World Cup in Ekaterinburg

Anton Sikerin
LLC Transport Integration Group

I am a postgraduate student in Computer Science and Engineering at the St. Petersburg Electrotechnical University LETI. I have 3.5 years of experience in transportation planning, with a main focus on transport systems modeling.

1) About the 2018 FIFA World Cup and the tasks set; 2) Tools of the department of modeling and analytics; 3) Passenger traffic analysis at Koltsovo airport and on railway transport (long-distance and suburban trains); 4) Population and employment analysis; 5) Route design for client groups with automated metainformation display and assembly of the final reports; 6) Deployment of a map server to give the customer access to the client group routes


Practical Oracle and MS SQL Server to PostgreSQL migration with Database Adapter technology

Dmitry Grebenshchikov
Diasoft Framework

A reliable assistant in import substitution and automation. I like good solutions and strategic partnerships.


PostgreSQL functions and ORM cooperation: a Django example

Kamil Faritovich Islamov

11+ years of developing, migrating, and implementing telecommunication billing systems based on Oracle. 4+ years of developing the business logic of distributed IoT solutions based on PostgreSQL. Candidate of Technical Sciences (PhD).

We will discuss some tricks and patterns for developing a web application architecture based on ORM technology with PostgreSQL functions, using Python Django as an example. We will also consider some ways of building business logic inside the database while keeping the Django framework's capabilities, including its built-in admin panel.


PostgreSQL Diagnostics from the standpoint of a system administrator (not a DBA)

Nicholas Sivko

Co-founder of the monitoring service, formerly leader of the maintenance department.

It often happens that you already have PostgreSQL in production, but you don't have a DBA. To demystify the PostgreSQL database, I'll tell you how to troubleshoot various problems while working with PostgreSQL. We will try to understand how to answer the "routine" questions of a typical system administrator: - Is everything OK with the database? - What consumes DB server resources? - What to optimize first to reduce resource consumption?


Processing 1 BILLION rows per second with PostgreSQL

Hans-Jürgen Schönig
Cybertec Schönig & Schönig GmbH

Hans-Jürgen Schönig is the CEO of Cybertec Schönig & Schönig GmbH. He and his company have provided professional PostgreSQL training, consulting, support, and services for more than 15 years.

Database systems are increasing in size, and so is the need to process huge amounts of data in real time. As commercial database vendors brag about their capabilities, we decided to push PostgreSQL to the next level and exceed 1 billion rows per second to show what we can do with Open Source. To those who need even more: 1 billion rows is far from the limit - a lot more is possible. Watch and see how we did it.


Complex structures without ORM

Philip Delgyado

Philip has been engaged in a wide range of IT activities, from two-tier Visual Basic applications to core SQL development. In recent years he has been working mainly on high-load projects in Java. At different times he developed the biggest bookmaking engine in Russia and the Yandex.Money service; now he is developing a specialized payment system. Philip regularly speaks at conferences.

I love complex knowledge domains, strong typing in applications, and 3NF, but I hate ORM. That is why I’ve been actively storing serialized structures in JSON fields (even before the JSON type was introduced). In this talk, I will tell you about some specifics of storing complex structures within DBMS fields, what problems can arise, and how to cushion the blow.
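One recurring concern when storing serialized structures in database fields is schema evolution. A minimal sketch of the versioning pattern in Python; the `_v` field, the types, and the migration rule are illustrative assumptions, not the speaker's code:

```python
import json
from dataclasses import dataclass, asdict

SCHEMA_VERSION = 2  # bumped whenever the serialized shape changes

@dataclass
class Address:
    city: str
    street: str

@dataclass
class Customer:
    name: str
    address: Address

def dump_customer(c):
    # embed a schema version so old rows can be migrated on read
    payload = asdict(c)
    payload["_v"] = SCHEMA_VERSION
    return json.dumps(payload)

def load_customer(raw):
    payload = json.loads(raw)
    version = payload.pop("_v", 1)
    if version < 2:
        # hypothetical v1 layout stored a flat "city" field;
        # lift it into the nested Address introduced in v2
        payload["address"] = {"city": payload.pop("city", ""), "street": ""}
    return Customer(name=payload["name"],
                    address=Address(**payload["address"]))
```

Migrating lazily on read, rather than rewriting every row, is one way to cushion the blow when the serialized structure changes.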


GiST speedup: patches, hacks and tweaks

Andrey Borodin
Octonica & Ural Federal University

Andrey Borodin, PhD, is an engineer at Octonica and an associate professor at Ural Federal University. He is responsible for the development of the analytical engine for the company's main product. He has been conducting research in the area of multidimensional indexes since 2008.

This report overviews some ideas and implementations for speeding up different parts of generalized search trees (GiST): 1. Intrapage indexing 2. Fractal tree technology 3. Modern algorithms for spatial indexing (RR*-tree) 4. Possible advancements of the GiST API


Dynamic Compilation of SQL Queries in PostgreSQL Using LLVM JIT

Dmitry Melnik

Dmitry has been working on the development and optimization of compilers at the Institute for System Programming of the Russian Academy of Sciences for more than 10 years. Before working on PostgreSQL, he contributed to open source projects such as the GCC and LLVM compilers and the JavaScript JIT compilers (WebKit/JSC and V8). Currently he is working on a JIT compiler for PostgreSQL.

Currently, to execute SQL queries, PostgreSQL uses an interpreter that implements the Volcano-style iteration model. At the same time, it is possible to get a significant speedup by JIT-compiling a query "on the fly": one can generate code that is specialized for the given SQL query and perform compiler optimizations using the information about table structure and data types that is already known at run time. This approach is especially important for complex queries whose performance is CPU-bound.

Using PostgreSQL in UIS, CoMagic Projects

Dmitry Beloborodov
UIS, CoMagic

Dmitry is the technical director and development director of the UIS and CoMagic projects. In this position, he leads a team of strong PostgreSQL developers who successfully solve a wide range of tasks, from hardware selection and PostgreSQL fine-tuning to sophisticated analytical query optimization.

Using PostgreSQL since 2003, we went all the way from a database of a couple of GB to a cluster of more than 5TB. At the moment, we have more than 700 tables and about 1500 stored procedures. We are ready to share with you the following: - Problems encountered at different development stages and how we resolved them. - Best practices in database administration. - Our own extension to work with several closely related databases. - Best known methods and tools that enable our several teams to work together without interference. - How we set up test equipment of different types. And, of course, we'll talk about optimization, and how we identify bottlenecks and high-load use cases.


Monitoring PostgreSQL in Avito, with case studies

Dmitry Vagin

Dmitry graduated from the Perm State Technical University with a specialization in automated control systems. From 2008 to 2015 he worked as a full-stack developer in a start-up team, where he started using PostgreSQL, solving problems of high load and query optimization and developing a fault-tolerant, horizontally scalable system. Since 2015 he has been working at Avito in the department responsible for PostgreSQL, dealing with data structure changes, migration, storage, optimization, etc.

A short talk about collecting data and monitoring database workload in Avito. Exporting metrics from stored procedures to Graphite. Collecting and visualizing pg_stat* metrics in Grafana. Case studies.
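For reference, Graphite's plaintext protocol is a single line per metric. A minimal sketch of exporting a metric; the metric path and host name are hypothetical, while port 2003 is Graphite's default plaintext listener:

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Render one metric in Graphite's plaintext protocol:
    '<metric.path> <value> <unix-timestamp>\\n'."""
    ts = int(timestamp if timestamp is not None else time.time())
    return "%s %s %d\n" % (path, value, ts)

def send_metrics(lines, host="graphite.local", port=2003):
    # hypothetical host; 2003 is the default Carbon plaintext port
    with socket.create_connection((host, port)) as sock:
        sock.sendall("".join(lines).encode())
```

A stored procedure's counters can be rendered with `graphite_line("db.avito.commits", 42)` and shipped in batches.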


Towards 1M writes/sec: Scaling PostgreSQL using Citus MX

Marco Slot
Citus Data

Marco has an M.S. in Parallel and Distributed Computer Systems from VU University Amsterdam and a PhD on cooperative self-driving cars from Trinity College Dublin. While studying, he also worked as a software engineer on Amazon CloudFront and Route 53 for several years, driving the design of latency-driven request routing and several other key features. He now works as a software engineer at Citus Data.

Citus allows you to distribute postgres tables across many servers. It extends postgres to transparently delegate or parallelise work across a set of worker nodes, enabling you to scale out the CPU and memory available for queries.

One year ago, we began a long journey to allow Citus to scale out another dimension: write throughput. With writes being routed through a single Postgres node, write throughput in Citus was ultimately bottlenecked on the CPUs of a single node. Citus MX is a new edition of Citus which allows distributed tables to be used from any of the nodes, enabling NoSQL-like write scalability.
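The distribution idea can be sketched as hash-partitioning the distribution column into a fixed number of shards placed across workers. A toy Python router; the shard count, hash function, and round-robin placement are our simplifications, not Citus internals:

```python
import hashlib

class ShardRouter:
    """Toy illustration of distributing a table by hashing its
    distribution column into a fixed number of shards, each placed
    on a worker node."""
    def __init__(self, workers, shard_count=8):
        self.workers = workers
        self.shard_count = shard_count

    def shard_for(self, dist_value):
        # stable hash of the distribution column value
        digest = hashlib.sha1(str(dist_value).encode()).hexdigest()
        return int(digest, 16) % self.shard_count

    def worker_for(self, dist_value):
        # round-robin placement of shards over workers
        return self.workers[self.shard_for(dist_value) % len(self.workers)]
```

Because the mapping is deterministic, any node that knows the shard map can route a write for a given key, which is the property that lets writes fan out across all nodes.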

temBoard: a new tool for PostgreSQL

Jean-Paul Argudo

Jean-Paul is a PostgreSQL pioneer. Since 2001, he has co-founded a website, the French-speaking PostgreSQL association, and the PostgreSQL Europe association. Since September 2005 he has been a co-founder of Dalibo (Paris and Avignon), an open-source services and support company. He puts all his energy and knowledge into his own company, doing mainly PostgreSQL consulting and expertise, design, and training.

The Dalibo team has been producing open source tools for PostgreSQL, among other things, for many years now.

This time I'll present temBoard, a new tool to remotely control PostgreSQL databases.

It's about monitoring, supervision, remote settings and actions... and many more features to come. The main goal is to provide a comprehensive console for PostgreSQL, needed by production DBAs to accomplish their daily tasks. We listened to Dalibo's customers; many of them effectively wrote the specs :-)

Linux IO internals for PostgreSQL administrators

Ilya Kosmodemiansky
Data Egret

Ilya Kosmodemiansky is an expert at Data Egret and a specialist in the PostgreSQL, DB2, and Oracle database management systems. His professional interests cover transactional data processing and the development of fault-tolerant, productive storage. Ilya pays much attention to popularizing PostgreSQL among the Russian-speaking audience. He regularly holds lectures and workshops on the effective use of PostgreSQL features in high-load web, telecom, and related projects.

Input-output (IO) performance issues have been on DBAs' agenda since the beginning of databases. The volume of data grows rapidly, and time is of the essence when one needs to get the necessary data fast from the disk and, more importantly, to the disk.

For most databases it is relatively easy to find a checklist of recommended Linux settings to maximize IO throughput, and in most cases this checklist is indeed good enough. It is, however, essential to understand how those settings actually work, especially if you run into corner cases.

Data analysis for all. Developing BI system, working on limited and even shared hardware

Sergey Mirvoda
Octonica, UrFU

Octonica co-founder and UrFU senior lecturer, specialized in developing high-performance analytical systems for government and large commercial companies. Teaches data- and engineering-related courses at UrFU. Married, with two kids.

The experience we've gained after 5 years of developing, deploying, and improving a BI system used in government. I will talk about the reality of government IT and our way through it: Postgres performance improvements, use of the latest features, rewriting user-generated queries to help the query optimizer, and other tweaks and hacks to tackle limited-hardware problems. These led us to a number of computer science papers and (now committed) patches to Postgres (see Andrey Borodin's talks for details).


Autonomous transactions in Postgres

Michael Shurutov
Postgres Professional

Michael graduated from the Altai State Technical University named after I.I. Polzunov in 1998 with a specialization in software for computer technology and automated systems. Since then he has been successfully maintaining different types of software, including commercial and freely distributed ones, as well as unusual configurations.

  1. What is an autonomous transaction?
  2. An overview of autonomous transactions in "big" DBMS: Oracle.
  3. Autonomous transaction logic in Postgres Pro.
  4. An overview of emulation methods for autonomous transactions in PostgreSQL.
  5. Comparing performance of the built-in Postgres Pro autonomous transaction mechanism and PostgreSQL emulation methods.
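Point 4, emulation, usually comes down to a second connection (for example via dblink) that commits independently of the outer transaction. A self-contained sketch of that pattern, with SQLite standing in for PostgreSQL so the example runs anywhere:

```python
import os
import sqlite3
import tempfile

def demo_autonomous_logging():
    """Emulate an autonomous transaction with a second connection:
    the audit-log row commits even though the outer transaction rolls
    back. SQLite stands in for PostgreSQL here; with PostgreSQL the
    side channel would typically be a dblink connection."""
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    outer = sqlite3.connect(path, isolation_level=None)
    outer.execute("CREATE TABLE data (x INT)")
    outer.execute("CREATE TABLE audit (msg TEXT)")

    outer.execute("BEGIN")  # outer transaction starts
    # "autonomous" part: a separate connection commits on its own
    side = sqlite3.connect(path, isolation_level=None)
    side.execute("INSERT INTO audit VALUES ('attempted update')")
    side.close()  # autocommit: the log row is durable now
    outer.execute("INSERT INTO data VALUES (1)")
    outer.execute("ROLLBACK")  # outer work is undone, audit survives

    check = sqlite3.connect(path)
    data = check.execute("SELECT count(*) FROM data").fetchone()[0]
    audit = check.execute("SELECT count(*) FROM audit").fetchone()[0]
    return data, audit
```

The extra connection is exactly what the built-in mechanism avoids, which is one source of the performance difference measured in point 5.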


Accelerating queries of set data types with GIN, GiST, and custom indexing extensions

Markus Nullmeier
University of Heidelberg

Markus Nullmeier has been working in diverse fields such as low-temperature physics and image processing, as well as numerical simulation of combustion, polymerisation, and fluid dynamics. Nowadays, he gets into back-end matters of astronomical databases running on PostgreSQL, working in the context of the International Virtual Observatory Alliance. With respect to PostgreSQL, his main interests are custom data types and their indexing.

Sets are apparently a useful data type for many kinds of applications. While PostgreSQL offers no built-in set data type, sets may be emulated to some degree with its built-in array and JSONB data types. Also, acceleration of respective containment (subset) queries is readily available as a built-in feature of the GIN index type.

Starting with the above, we will then explore the performance gains enabled by custom set data types, and especially by customisation code in C ("operator classes") for the GIN and GiST index types.
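The core idea behind GIN acceleration of containment queries can be sketched as an inverted index: map each element to the rows containing it, and answer a subset query by intersecting posting lists. A toy in-memory Python version (our simplification; a real GIN index is a disk-based tree of keys with posting lists):

```python
class InvertedIndex:
    """Minimal inverted index over set-valued rows, illustrating the
    idea behind GIN: one posting list per element, containment (@>)
    answered by intersecting posting lists."""
    def __init__(self):
        self.postings = {}  # element -> set of row ids
        self.rows = []

    def add(self, row_set):
        rid = len(self.rows)
        self.rows.append(frozenset(row_set))
        for elem in row_set:
            self.postings.setdefault(elem, set()).add(rid)

    def contains_query(self, subset):
        # rows whose stored set is a superset of `subset`
        if not subset:
            return set(range(len(self.rows)))
        lists = [self.postings.get(e, set()) for e in subset]
        return set.intersection(*lists)
```

Only rows appearing in every queried element's posting list survive the intersection, so the scan cost depends on the posting lists rather than the table size.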

Deadlock explanations

Michael Tyurin

Independent entrepreneur in the field of data technology and predictive analytics; former Chief System Architect of Avito.

< Query failed: ERROR: deadlock detected
< DETAIL: Process 17371 waits for ShareLock on transaction 102733872; Blocked by process.
< Process 10414 waits for ShareLock on transaction 102733874; Blocked by process 17371.

Such "unpleasant" messages from the server can seriously puzzle the developer. When working with locks, in particular, with transactions in general, it is necessary to take into account the features of the implementation of client libraries, which can cause the above exception.

This short talk will explain the mechanics of lock interaction, with the main attention paid to the causes of deadlocks. References to the relevant documentation pages will be given. A technique for "bypassing" this concurrent data access problem will be described and illustrated with generalized examples from practice. The talk will be interesting to database developers and administrators, as well as client-side application developers.
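A standard technique for avoiding deadlocks is to acquire locks in a single global order, the same advice as updating rows in a consistent key order in SQL. A minimal Python sketch with thread locks standing in for row locks (our illustration, not the speaker's material):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(src, dst, body):
    """Acquire the two locks in one global order (here: by object id),
    so that two concurrent 'transactions' touching the same pair of
    resources in opposite order can never deadlock."""
    first, second = sorted((src, dst), key=id)
    with first:
        with second:
            body()

results = []
# the two threads name the locks in opposite order, which would
# deadlock under naive acquisition order
t1 = threading.Thread(target=transfer,
                      args=(lock_a, lock_b, lambda: results.append("t1")))
t2 = threading.Thread(target=transfer,
                      args=(lock_b, lock_a, lambda: results.append("t2")))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without the sorting step, t1 holding A while t2 holds B reproduces exactly the ShareLock cycle shown in the error message above.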

Distributed workflow specifics in PostgreSQL

Philip Delgyado

Philip has been engaged in a wide range of IT activities, from two-tier Visual Basic applications to core SQL development. In recent years he has been working mainly on high-load projects in Java. At different times he developed the biggest bookmaking engine in Russia and the Yandex.Money service; now he is developing a specialized payment system. Philip regularly speaks at conferences.

When working with a complex business logic, you often have to implement a workflow - a sequence of several processing steps, with each step implementing a separate part of the business logic. This is usually done with specialized queues, but if there are high reliability demands, it makes sense to do everything on PostgreSQL.

I will describe the tasks that require a workflow implementation, offer a solution, compare it with other options, and tell you about the implementation traps and pitfalls.
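A toy sketch of the pattern (our illustration, not the speaker's implementation): each item advances through a fixed sequence of steps, and every transition is recorded, mimicking a PostgreSQL table where the step change and the step's business effect commit in one transaction.

```python
STEPS = ["received", "validated", "charged", "done"]

class Workflow:
    """Toy workflow engine: items move through a fixed step sequence
    and every transition is recorded. In the PostgreSQL version each
    row would live in a table and advance() would be one transaction."""
    def __init__(self):
        self.items = {}     # item id -> current step index
        self.history = []   # (item, from_step, to_step) transitions

    def submit(self, item_id):
        self.items[item_id] = 0

    def advance(self, item_id):
        pos = self.items[item_id]
        if pos >= len(STEPS) - 1:
            return STEPS[-1]  # already finished
        # in PostgreSQL this state change and the step's side effects
        # would share one transaction, so a crash cannot lose a step
        self.items[item_id] = pos + 1
        self.history.append((item_id, STEPS[pos], STEPS[pos + 1]))
        return STEPS[pos + 1]
```

The reliability argument is exactly this: unlike an external queue, the state transition is atomic with the work it describes.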

1C:Enterprise business platform: working with PostgreSQL

Peter Gribanov

"1C:Enterprise" is the most popular in Russia/Eastern Europe business applications development framework supports PostgreSQL for 10+ years. We'll review some cases and pecularities of 1C and PostgreSQL tandem.

Using PostgreSQL in automating contextual advertising by Alytics for near real-time processing of mixed OLTP/OLAP load

Max Vikharev

Max was lucky to be part of the team in two technology startups. From 2007 to 2012 he went from a junior C/C++ developer to a security specialist and release engineer at the first Russian manufacturer of POS terminals certified by PCI PED. Since 2012 Max has been engaged in the Alytics project, a service of contextual advertising automation, going from founding developer to technical director. Among other things, Max has long been directly involved in service maintenance and infrastructure management.

In the data persistence layer, using PostgreSQL from the very start of development, we went all the way from a small cluster on a virtual machine to a multi-host system that provides near real-time processing of mixed OLTP/OLAP load. In this talk, I’m going to tell you about the main development stages of our analytical solution at the application and infrastructure levels, and describe the specifics of using PG that we encountered.


Experience of Greenplum deployment in Aviasales

Ainur Timerbaev

Ainur has been working at Aviasales for more than 4 years, participating in the development of almost all of the company's IT projects. Before that he worked at several other companies. Ainur prefers the Python language and used to have a front-end specialization. His hobbies are roller skating and electronics (Arduino).


Semiautomatic database schema conversion in MoySklad

Oleg Alekseev

CTO and co-founder of the MoySklad online service.

The MoySklad online service includes an important component: a semiautomatic database conversion framework. Its history and supported conversions are the subject of this talk.


ZSON, PostgreSQL extension for transparent JSONB compression

Alexander Alekseev

Alexander is a programmer, OSS and FP enthusiast, and a contributor to PostgreSQL, FreeBSD, and other open source projects. He is the author of a blog and one of the regular speakers of a podcast. Since 2015 Alexander has been working at Postgres Professional on PostgreSQL core development.

ZSON is a PostgreSQL extension for transparent JSONB compression. Compression is based on a shared dictionary of strings most frequently used in specific JSONB documents (not only keys, but also values, array elements, etc). In some cases ZSON can save half of your disk space and give you about 10% more TPS.
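The mechanism can be sketched in a few lines of Python: learn a dictionary of the most frequent strings over sample documents, then replace those strings with short codes. The `~N` marker and in-tree substitution are our simplifications; real ZSON learns 16-bit codes and compresses at the storage layer.

```python
import json

def build_dictionary(docs, top=16):
    """Collect the strings occurring most often across sample
    documents -- keys and string values alike -- as ZSON's training
    step does over existing JSONB data."""
    freq = {}
    def walk(node):
        if isinstance(node, dict):
            for k, v in node.items():
                freq[k] = freq.get(k, 0) + 1
                walk(v)
        elif isinstance(node, list):
            for v in node:
                walk(v)
        elif isinstance(node, str):
            freq[node] = freq.get(node, 0) + 1
    for d in docs:
        walk(d)
    common = sorted(freq, key=freq.get, reverse=True)[:top]
    return {s: i for i, s in enumerate(common)}

def compress(doc, dictionary):
    # replace dictionary strings by short "~N" codes; a real
    # implementation would also escape literal strings starting "~"
    def enc(node):
        if isinstance(node, dict):
            return {enc(k): enc(v) for k, v in node.items()}
        if isinstance(node, list):
            return [enc(v) for v in node]
        if isinstance(node, str) and node in dictionary:
            return "~%d" % dictionary[node]
        return node
    return enc(doc)

def decompress(doc, dictionary):
    rev = {i: s for s, i in dictionary.items()}
    def dec(node):
        if isinstance(node, dict):
            return {dec(k): dec(v) for k, v in node.items()}
        if isinstance(node, list):
            return [dec(v) for v in node]
        if isinstance(node, str) and node.startswith("~") and node[1:].isdigit():
            return rev[int(node[1:])]
        return node
    return dec(doc)
```

Documents with many repeated keys and values shrink noticeably once those strings collapse to two-to-four character codes, which is where the disk and TPS savings come from.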


10+ PB efficient storage and processing via PostgreSQL. Novel paradigm of lean data infrastructure for Data-Driven Enterprise

Alexey Valerevich Mergasov
NOXA Data Lab

Founder and CTO of NoXA Data Lab, a company specializing in homogeneous data infrastructures with a capacity of 10 PB and more. Before NoXA, Alexey developed data processing solutions at Informatica Corp. He has more than 12 years of experience in data infrastructure development for large enterprises.

Alexey will present technical details and share hands-on experience of applying extreme data normalization to the design and development of data infrastructures with exceptional parameters. Extreme normalization-based data infrastructures have the following competitive advantages in comparison with market leaders: - Real-time data processing for 10 PB of data and more - 2-6 times better overall performance - 100% data consistency across the total data landscape - Almost linear scalability - 4-10 times lower cost of ownership - etc. The abovementioned approach has been successfully applied outside the Russian market in telecommunications, retail, fintech, manufacturing (Industry 4.0, industrial IoT), and government institutions.