PgConf.Russia 2017


March 15–17, 2017

Postrelease

  • 63 talks
  • offline format

Talks

Talks archive

PgConf.Russia 2017
  • Maksim Viharev, Alytics

    In the data persistence layer, we have used PostgreSQL from the very start of development and went all the way from a small cluster on a virtual machine to a multi-host system that handles a mixed OLTP/OLAP load in near real time. In this talk, I'm going to walk through the main development stages of our analytical solution at the application and infrastructure levels and describe the specifics of using PostgreSQL that we encountered (a generic sketch of one near-real-time analytics building block appears after the talk list).


  • Dmitry Beloborodov, UIS, CoMagic

    Using PostgreSQL since 2003, we went all the way from a database of a couple of GB to a cluster of more than 5 TB. At the moment, we have more than 700 tables and about 1500 stored procedures. We are ready to share the following:
    - problems encountered at different development stages and how we resolved them;
    - best practices in database administration;
    - our own extension for working with several closely related databases (for comparison, a sketch of the stock foreign-data-wrapper approach appears after the talk list);
    - methods and tools that enable our several teams to work together without interference;
    - how we set up test environments of different types.
    And, of course, we'll talk about optimization and how we identify bottlenecks and high-load use cases.


  • Ilya Kosmodemiansky, Data Egret

    Input/output (IO) performance issues have been on DBAs' agendas since the early days of databases. Data volumes grow rapidly, and time is of the essence when you need to get data from disk quickly and, more importantly, to disk.

    For most databases it is relatively easy to find a checklist of recommended Linux settings to maximize IO throughput, and in most cases such a checklist is indeed good enough. It is, however, essential to understand how those settings actually work, especially when you run into corner cases.

  • Vadim Yatsenko, Progress Soft

    The talk will describe how we implemented storage for large tables (more than 1 billion new rows per day). The project has been in production for 2 years, with about 300 TB of data in total (25 PostgreSQL servers across 2 data centers). I'll talk about the mistakes made in organizing large-table storage in the initial phase of the project and how we corrected them, and also about how we organize data rotation and archiving (a generic partitioning sketch follows the talk list). Finally, I'll cover what we were missing in PostgreSQL 9.4 out of what appeared in 9.5 and 9.6, and which new features we are waiting for in upcoming PostgreSQL releases.
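
One common building block for near real-time analytics on top of an OLTP schema in PostgreSQL is a periodically refreshed materialized view. The Alytics abstract does not say how their solution is actually structured; the table and view names below are invented purely for illustration.

    -- Hypothetical OLTP table; all names here are invented for illustration.
    CREATE TABLE clicks (
        id          bigserial PRIMARY KEY,
        campaign_id integer NOT NULL,
        cost        numeric(12,2) NOT NULL,
        clicked_at  timestamptz NOT NULL DEFAULT now()
    );

    -- Pre-aggregated view serving the analytical part of the load.
    CREATE MATERIALIZED VIEW campaign_daily_stats AS
    SELECT campaign_id,
           date_trunc('day', clicked_at) AS day,
           count(*)  AS clicks,
           sum(cost) AS spent
    FROM clicks
    GROUP BY 1, 2;

    -- A unique index is required for REFRESH ... CONCURRENTLY.
    CREATE UNIQUE INDEX ON campaign_daily_stats (campaign_id, day);

    -- Refreshing periodically (e.g. from cron) keeps the analytics close to
    -- real time without blocking readers of the view.
    REFRESH MATERIALIZED VIEW CONCURRENTLY campaign_daily_stats;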
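
The UIS/CoMagic abstract mentions an in-house extension for working with several closely related databases; that extension is not shown here. For comparison, the sketch below uses postgres_fdw, the stock PostgreSQL mechanism for querying a neighbouring database; the server name, host, and credentials are placeholders.

    -- postgres_fdw ships with PostgreSQL and exposes tables of another
    -- database as local foreign tables.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    -- Connection details are placeholders for this example.
    CREATE SERVER billing_srv
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'billing.internal', dbname 'billing', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER billing_srv
        OPTIONS (user 'app', password 'secret');

    -- Import the remote schema into a local one; the imported foreign
    -- tables can then be joined with local tables in ordinary queries.
    CREATE SCHEMA billing;
    IMPORT FOREIGN SCHEMA public
        FROM SERVER billing_srv
        INTO billing;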
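
Rotation of tables that grow by a billion rows a day was typically handled in the PostgreSQL 9.4–9.6 era with inheritance-based partitioning, dropping whole child tables instead of deleting rows. The sketch below shows that generic pattern with invented table names; it is not the specific scheme described in the talk.

    -- Parent table; children hold one day of data each.
    CREATE TABLE events (
        event_time timestamptz NOT NULL,
        payload    jsonb
    );

    -- Child table with a CHECK constraint so the planner can skip it
    -- when the queried range does not overlap this day.
    CREATE TABLE events_2017_03_15 (
        CHECK (event_time >= '2017-03-15' AND event_time < '2017-03-16')
    ) INHERITS (events);

    -- Simplified routing trigger (real setups generate one branch per day).
    CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
    BEGIN
        INSERT INTO events_2017_03_15 VALUES (NEW.*);
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_insert
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

    -- Rotation and archiving: detach an old day and drop (or dump) it as a
    -- whole instead of running a massive DELETE.
    ALTER TABLE events_2017_03_15 NO INHERIT events;
    DROP TABLE events_2017_03_15;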
