February 03–05, 2020

PgConf.Russia 2020

PGConf.Russia is a leading Russian international PostgreSQL conference, annually bringing together more than 700 PostgreSQL professionals from Russia and other countries: core and software developers, DBAs and IT managers. The three-day program includes training workshops presented by leading PostgreSQL experts, more than 40 talks, panel discussions and a lightning talk session.

Themes

  • PostgreSQL at the cutting edge of technology: big data, internet of things, blockchain
  • New features in PostgreSQL and around: PostgreSQL ecosystem development
  • PostgreSQL in business software applications: system architecture, migration issues and operating experience
  • Integration of PostgreSQL with 1C, GIS and other software application systems
  • more than 700 participants
  • 62 talks
  • offline format

Talks

Talks archive: PgConf.Russia 2020
  • Alexey Fadeev, Sibedge

    Recently I worked on a project where GraphQL was used to send requests to a .NET Core backend, and this turned out not to be a good idea. The point is that a GraphQL query is a hierarchical structure with a dynamic set of fields, and the available tools make it difficult to execute such queries from a statically typed language against a relational database. So I came up with the idea of using the plv8 extension to execute GraphQL queries right on the database side. It took me about two hours to develop a working prototype that could run the same queries as the software that had been under development for more than a month! Various improvements have been made since then, and I want to present them all. If you are thinking of using GraphQL instead of REST, this talk could be very useful and save you a lot of time.
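
    A minimal sketch of the idea, not the speaker's implementation: the GraphQL query arrives pre-parsed as JSON, and a plv8 function builds and runs the corresponding SQL on the database side. The table and field names are hypothetical.

      CREATE EXTENSION IF NOT EXISTS plv8;

      -- Resolve a flat selection set such as {"table": "users", "fields": ["id", "name"]}
      CREATE OR REPLACE FUNCTION graphql_exec(q jsonb) RETURNS jsonb AS $$
        // plv8 hands the jsonb argument to JavaScript as a parsed object
        var cols = q.fields.map(function (f) { return plv8.quote_ident(f); }).join(', ');
        var sql  = 'SELECT ' + cols + ' FROM ' + plv8.quote_ident(q.table);
        return plv8.execute(sql);  // array of row objects, converted back to jsonb
      $$ LANGUAGE plv8 STABLE;

      -- SELECT graphql_exec('{"table": "users", "fields": ["id", "name"]}');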

  • Andrey Zubkov, Parmalogika LLC

    Every DBA needs some kind of tool for historical workload analysis. Suppose that one morning your monitoring team reports a sudden performance degradation at 2–3 a.m., and now you need to investigate the issue. Which activities consumed the most resources during that hour? There are several tools for solving this problem, and I'll talk about one very easy and convenient tool: pg_profile. It needs only a Postgres database and a cron-like tool to run, and it will generate a workload profile report for your database whenever you need one. This report is a good starting point for further investigation.
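
    A minimal sketch of that workflow, assuming pg_profile is installed and its schema is on the search path; the function names follow the pg_profile documentation, and the sample IDs are hypothetical.

      -- Take a workload sample; schedule this from cron, e.g. every 30 minutes
      SELECT take_sample();

      -- List stored samples to find the pair bracketing the 2-3 a.m. window
      SELECT * FROM show_samples();

      -- Generate an HTML workload report between two sample IDs
      SELECT get_report(57, 58);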

  • Ivan Chuvashov, Calltouch LLC

    When migrating data from one DBMS to another, the question arises: choose a third-party tool, or program the migration yourself? Companies that try to build up expertise in-house choose the second option, and end up reinventing the wheel. The market, however, offers powerful free data migration tools. One such tool is Pentaho Data Integration, part of Pentaho Community Edition. This talk will discuss using the package for data migration between Oracle and PostgreSQL. Particular attention will be paid to the problems encountered with the tool, and to testing the completeness and integrity of the migrated data.
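
    One simple completeness check after such a migration (my illustration, not material from the talk): compare row counts between the Oracle source, exposed to PostgreSQL through the oracle_fdw extension, and the migrated table. The table names are hypothetical.

      -- ora_orders: foreign table defined over the Oracle source via oracle_fdw
      -- orders:     the migrated PostgreSQL table
      SELECT (SELECT count(*) FROM ora_orders) AS source_rows,
             (SELECT count(*) FROM orders)     AS target_rows,
             (SELECT count(*) FROM ora_orders)
               = (SELECT count(*) FROM orders) AS counts_match;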

  • Sangwook (Shawn) Kim, Apposha

    Cloud storage has some unique characteristics compared to traditional storage, mainly because it is virtualized and controlled by software. One example is that AWS EBS delivers higher throughput with larger I/O sizes, up to 256 KiB, without hurting latency. Hence, a user gets only about 4 MiB/s from a 1,000 IOPS EBS volume if the I/O request size is 4 KiB, but about 250 MiB/s if the request size is 256 KiB. This is because EBS consumes one I/O from the given IOPS budget for every request regardless of its size (up to 256 KiB). Unfortunately, PostgreSQL cannot exploit the full potential of cloud storage, because PostgreSQL was designed without these characteristics in mind.
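
    The arithmetic behind those figures, as a quick illustration: with a fixed IOPS budget, throughput scales linearly with the request size, up to the 256 KiB accounting limit.

      -- throughput = IOPS budget x I/O size (EBS counts one I/O per request up to 256 KiB)
      SELECT round(1000 * 4   / 1024.0, 1) AS mib_per_s_at_4kib,    -- ~3.9 MiB/s
             round(1000 * 256 / 1024.0, 1) AS mib_per_s_at_256kib;  -- 250.0 MiB/s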

    In this talk, I will introduce the AppOS extension, which improves the throughput of a write-intensive workload by 10x by transparently making PostgreSQL cloud-storage-native. AppOS works like a storage driver that efficiently exploits the characteristics of cloud storage, such as the dependency of throughput and latency on I/O size, atomic write support in cloud block storage, and fast but non-durable local SSDs. To do this, AppOS comprises a Linux-compatible file I/O stack, including a virtual file system, page cache, block I/O layer, and cloud storage driver. On top of the file I/O stack, a syscall module supports registering pre- and post-handlers for file I/O-related system calls, so that AppOS works transparently, without modifying PostgreSQL code.

    I will focus on presenting key use cases and performance results of the AppOS extension after explaining its internals. Specifically, I will show performance results for OLTP and some batch workloads using standard benchmarking tools such as pgbench and sysbench. I will also present performance results and their implications on multiple clouds, including AWS, GCP, and Azure.


Partners

PgConf.Russia 2020 partner categories:

  • Organizational
  • Informational
  • Technical
  • Partner