Vladimir Sitnikov, Pgjdbc and JMeter committer
18:00 05 February
22 min

PostgreSQL and JDBC: striving for high performance

Common Java wisdom is to use PreparedStatements and batch DML in order to achieve top performance. It turns out one cannot just blindly follow the best practices: to get high throughput, you need to understand the specifics of the database in question and the content of the data.
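
For reference, here is a minimal sketch of that common wisdom in plain JDBC; the table, column names and connection details are illustrative only. One PreparedStatement is reused and rows are queued with addBatch(), so they travel to the server in far fewer network round trips than row-by-row execution would need.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsertExample {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/test", "postgres", "secret");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO events(id, payload) VALUES (?, ?)")) {
                con.setAutoCommit(false);
                for (int i = 0; i < 1000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "event-" + i);
                    ps.addBatch();      // queue the row instead of executing it right away
                }
                ps.executeBatch();      // send the queued rows as one batch
                con.commit();
            }
        }
    }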

In this talk we will see how proper use of the PostgreSQL protocol enables high-performance fetching and storing of data, and how trivial changes to application and/or JDBC driver code can result in dramatic performance improvements. We will also examine how server-side prepared statements should be activated, and discuss the pitfalls of using them.
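
As a rough illustration of how the activation typically works in pgjdbc (the query and connection details are made up for the example): the driver switches a PreparedStatement to a named server-prepared statement once it has been executed prepareThreshold times, and the threshold can be lowered either for the whole connection via the URL or per statement through the pgjdbc-specific PGStatement interface.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.postgresql.PGStatement;

    public class ServerPreparedExample {
        public static void main(String[] args) throws Exception {
            // Lower the threshold for the whole connection via the URL...
            String url = "jdbc:postgresql://localhost:5432/test?prepareThreshold=3";
            try (Connection con = DriverManager.getConnection(url, "postgres", "secret");
                 PreparedStatement ps = con.prepareStatement("SELECT ?::int + 1")) {
                // ...or per statement through the pgjdbc-specific interface.
                ps.unwrap(PGStatement.class).setPrepareThreshold(3);
                for (int i = 0; i < 10; i++) {
                    ps.setInt(1, i);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        // From the third execution on, pgjdbc keeps a named statement
                        // on the server, so the parse/plan work is not repeated.
                        System.out.println(ps.unwrap(PGStatement.class).isUseServerPrepare()
                                + " -> " + rs.getInt(1));
                    }
                }
            }
        }
    }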

Slides

Video

Other talks

  • Tatsuo Ishii

    PostgreSQL clusters using streaming replication and pgpool-II

    The talk is about PostgreSQL clusters using streaming replication and pgpool-II, which are quite popular in Japan. In addition, the next version of pgpool-II will be released this winter, so the talk will also cover what's new in that release.

  • Valentine Gogichashvili, Zalando

    Data Integration in the World of Microservices

    Since its launch in 2008, Zalando has grown with tremendous speed. The road from startup to multinational corporation has been full of challenges, especially for Zalando's technology team. Distributed across Berlin, Helsinki, Dublin and Dortmund — and nearly 900 professionals strong — Zalando Technology still plans to expand by adding 1,000 more developers through the end of 2016. This rapid growth has shown us that we need to be very flexible about developing processes and organizational structures, so we can scale and experiment. In March 2015, our team adopted Radical Agility: a tech management strategy that emphasizes Autonomy, Purpose, and Mastery, with trust as the glue holding it all together. To make autonomy possible, teams can now choose their own technology stacks for the products they own. Microservices, speaking with each other using RESTful APIs, promise to minimize the costs of integration between autonomous teams. Isolated AWS accounts, run on top of our own open-source Platform as a Service (called STUPS.io), give each autonomous team enough hardware to experiment and introduce new features without breaking our entire system.

    One small issue with having microservices isolated in their individual AWS accounts: Our teams keep local data for themselves. In this environment, building an ETL process for data analyses, or integrating data from different services, becomes quite challenging. PostgreSQL's new logical replication features, however, now make it possible to stream all the data changes from the isolated databases to the data integration system so that it can collect this data, represent it in different forms, and prepare it for analysis.

    In this talk, I will discuss Zalando's open-source data collection prototype, which uses PostgreSQL's logical replication streaming capabilities to collect data from various PostgreSQL databases and recreate it for different formats and systems (Data Lake, Operational Data Store, KPI calculation systems, automatic process monitoring). The audience will come away with new ideas for how to use Postgres streaming replication in a microservices environment.
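
    As a very rough sketch of the idea (not Zalando's actual prototype; the slot name, decoding plugin and connection details are hypothetical), the changes accumulated in a logical replication slot can be drained over a plain JDBC connection and handed to a data-integration consumer:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class LogicalChangesPoller {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:postgresql://localhost:5432/shop", "etl", "secret");
                     Statement st = con.createStatement()) {
                    // One-time setup (requires wal_level = logical and a free replication slot):
                    // st.execute("SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding')");

                    // Drain whatever has accumulated in the slot since the previous call.
                    try (ResultSet rs = st.executeQuery(
                            "SELECT lsn, xid, data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)")) {
                        while (rs.next()) {
                            // 'data' is a textual description of each INSERT/UPDATE/DELETE
                            // that a downstream consumer could parse and forward.
                            System.out.println(rs.getString("lsn") + " " + rs.getString("data"));
                        }
                    }
                }
            }
        }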

  • Jean-Paul Argudo, Dalibo

    Migration to PostgreSQL: reasons... and consequences

    The talk will be built around all the traditional arguments for choosing PostgreSQL over other options in the database domain... but also, and that is quite new in the community, the consequences of this choice. Adopting PostgreSQL brings with it the adoption of other things, such as Linux and open-source thinking, and the fast release pace of PostgreSQL demands new validation methods that the company must adapt to... etc.

  • Michael Paquier

    PostgreSQL and backups

    A backup is something no Postgres deployment should go without, as it is the insurance that gets a deployment back on its feet should disaster strike.

    In this talk we will discuss why backups are essential in any sane PostgreSQL deployment (this may seem obvious) and what options are available to define and set up a good backup strategy. On top of that, we will discuss how the future of backups could be handled, particularly differential backups, which are gaining popularity among users with large deployments.