Arthur Zakirov, Postgres Professional
18:00, 05 February
22 min

Using pg_variables as temporary tables

PostgreSQL provides the ability to create temporary tables. Although a temporary table is accessible only to a single session and is removed at the end of that session, all information about it is stored in the system catalogs of PostgreSQL. This leads to several issues that make temporary tables difficult or impossible to use in some cases. There have been attempts to solve this problem, including in our company, but they have not yet succeeded, mainly because of the PostgreSQL engine. In this talk I want to present the simple and small pg_variables extension. It allows you to create table variables along with scalar ones. I will show how it can replace temporary tables and what advantages and disadvantages it has.
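
A minimal sketch of the idea, based on the functions described in the pg_variables README (pgv_set, pgv_get, pgv_insert, pgv_select; exact names and signatures may vary between versions):

    -- Assumes the extension is installed and available
    CREATE EXTENSION IF NOT EXISTS pg_variables;

    -- Scalar variable: lives in session memory, no system catalog entries
    SELECT pgv_set('vars', 'user_id', 42);
    SELECT pgv_get('vars', 'user_id', NULL::int);  -- returns 42

    -- Table variable as a stand-in for CREATE TEMP TABLE
    SELECT pgv_insert('vars', 'orders', row(1, 'pending'::text));
    SELECT pgv_insert('vars', 'orders', row(2, 'shipped'::text));

    -- Reading it back requires a column definition list
    SELECT * FROM pgv_select('vars', 'orders') AS t(id int, status text);

    -- Release all variables explicitly when done
    SELECT pgv_free();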

Slides

Video

Other talks

  • Dmitry Yuhtimovsky, Gilev.ru
    22 min

    Magic tricks followed by exposure (1C+PG)

    • Trick number one: how to convince the accounting department to buy a new server.
    • Trick number two: how to show that MS SQL Server is faster than PostgreSQL.
    • Trick number three: how to show that PostgreSQL is faster than MS SQL Server.

  • Kamil Islamov, Stickeroid Ai
    22 min

    CTE Queries Usage for Business Logic

    Wide usage of Common Table Expression (CTE) queries is considered a core paradigm for implementing the business logic of high-load web applications built on PostgreSQL functions, as in the sketch below.
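
    A minimal sketch of the pattern; the orders and order_audit tables are invented for illustration. Writable CTEs let one statement chain several business-logic steps:

        -- Mark an order as paid and write an audit record in one statement
        WITH paid AS (
            UPDATE orders
               SET status = 'paid'
             WHERE id = 42
               AND status = 'pending'
         RETURNING id, customer_id
        ),
        audit AS (
            INSERT INTO order_audit (order_id, event)
            SELECT id, 'paid' FROM paid
            RETURNING order_id
        )
        SELECT count(*) AS orders_paid FROM paid;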

  • Nikolay Samokhvalov, Nombox LLC
    45 min

    Enterprise-level approach to PostgreSQL tuning: database experiments

    Is shared_buffers = 25% too much, not enough, or just right?

    How can we ensure that this rather outdated recommendation suits our needs well?

    It is time to start applying an enterprise-level approach to tuning postgresql.conf: not using various blind auto-tuners or advice from old articles and blog posts, but relying on the following two aspects:

    1. comprehensive database experiments, conducted in an automated fashion and repeated multiple times under conditions as close to production as possible, and
    2. a deep understanding of DBMS and OS internals.

    Using Nancy CLI (https://gitlab.com/postgres.ai/nancy), we will consider a concrete example: the infamous shared_buffers under various circumstances and in various projects. We will try to figure out how to optimize this setting for a given infrastructure, database, and workload. A quick sanity check is sketched below.
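
    Independently of Nancy, a rough sanity check of the current setting can be done with the standard statistics views (a signal only, not a substitute for real experiments):

        -- Current setting
        SHOW shared_buffers;

        -- Per-database buffer cache hit ratio; persistently low values
        -- under a steady workload hint that shared_buffers may be
        -- undersized for the working set.
        SELECT datname,
               round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
                   AS cache_hit_pct
        FROM pg_stat_database
        WHERE blks_hit + blks_read > 0;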

  • Maksim Viharev, Alytics
    45 min

    GreenHouseSQL as a scalable analytics system for PostgreSQL, Greenplum and ClickHouse

    At pgconf'17 I talked about our analytics system based on PostgreSQL. Afterwards we looked at Hadoop, S3, Presto, Vertica, and other frights. Finally we stopped suffering nonsense and simply complemented PostgreSQL with off-the-shelf Greenplum and ClickHouse. As a result, we achieved amazing performance, fast migration, easy maintenance, reliability, and horizontal scalability. The system can now be recovered after a failure with two commands, infrastructure costs went down, and functionality expanded thanks to ANSI SQL, MPP, and in-memory processing. All within the open-source and pure-SQL paradigm. We called the product GreenHouseSQL; it is our internal full-cycle data platform. In the talk we will show the beauty of the solution's internals, explain its advantages and flaws, share tips and tricks for starting with Greenplum, and discuss why we need ClickHouse, what is left to PostgreSQL, and how it all works together.