
Performance study of MS SQL Server Developer 2016 and PostgreSQL 10.5 for 1C

by admin

Objectives and requirements for 1C Accounting testing

The main purpose of the testing is to compare the behavior of the 1C system on two different DBMSs, all other things being equal, i.e. the 1C database configuration and the initial data set must be identical in each test.
The main parameters to be measured during testing:

  • Time to complete each test (taken by the 1C development department)
  • DBMS load during test execution (taken by the DBMS administrators) and server environment load (taken by the system administrators)

Testing of the 1C system must take the client-server architecture into account, so it is necessary to fully emulate one or more users working in the system: entering information through the interface and saving it to the database. At the same time, a large volume of periodic data has to be spread over a long period, so that totals accumulate in the accumulation registers.
To perform the testing, an algorithm was developed in the form of a scripted test scenario for the 1C Accounting 3.0 configuration, which enters test data into the 1C system sequentially. The script allows various settings for the actions performed and for the amount of test data. A detailed description follows below.

Description of the settings and characteristics of the environments to be tested

We at Fortis decided to double-check the results, including with the well-known Gilev test.
We were also spurred into testing by several publications on performance changes observed when switching from MS SQL Server to PostgreSQL, such as: 1C Battle: PostgreSQL 9, 10 vs MS SQL 2016.
So, here is the testing infrastructure:

1C server: 8 CPU cores, 16 GB RAM, MS Windows Server 2012R2 Standard (x64), 1C platform 8.3.13.1865
MS SQL server: 8 CPU cores, 32 GB RAM, MS Windows Server 2012R2 Standard (x64), DBMS version 13.0.5264.1
PostgreSQL server: 8 CPU cores, 32 GB RAM, CentOS 7.6.1810 (x64), DBMS version 10.5 (built with gcc 4.8.5 20150623)

The MS SQL and PostgreSQL servers were virtual machines and were powered on alternately for the test in question. 1C ran on a separate server.
Hypervisor specification:
Model: Supermicro SYS-6028R-TRT
CPU: Intel Xeon CPU E5-2630 v3 @ 2.40GHz (2 sockets × 16 CPU HT = 32 CPU)
RAM: 212 GB
OS: VMware ESXi 6.5
Power profile: Performance
Hypervisor disk subsystem:
Controller: Adaptec 6805, cache size: 512MB
Volume: RAID 10, 5.7 TB
Stripe size: 1024 KB
Write cache: on
Read cache: off
Disks: 6 × HGST HUS726T6TAL
Sector size: 512 bytes
Write cache: on
PostgreSQL was configured as follows :

  1. postgresql.conf:
    The basic configuration was produced with the calculator at pgconfigurator.cybertec.at; the parameters huge_pages, checkpoint_timeout, max_wal_size, min_wal_size and random_page_cost were then changed based on information from the sources listed at the end of the publication. The value of temp_buffers was increased on the assumption that 1C makes heavy use of temporary tables:
    listen_addresses = '*'
    max_connections = 1000
    # Data cache size in RAM. Most row manipulation happens in this memory area.
    # On systems with 32GB of RAM it is recommended to allocate about 25% of total memory.
    shared_buffers = 9GB
    # Use of huge memory pages (Linux kernel tuning - vm.nr_hugepages).
    huge_pages = on
    # Memory limit for temporary tables per session.
    temp_buffers = 256MB
    # Memory limit per ORDER BY, DISTINCT, merge join, hash join, hash-based
    # aggregation, hash-based processing of IN subqueries.
    # Based on the assumption that 1C issues complex large queries (profile
    # "Mostly complicated real-time SQL queries" in the calculator).
    # It should probably be decreased to 64MB.
    work_mem = 128MB
    # Memory limit for maintenance operations: VACUUM, index creation, etc.
    maintenance_work_mem = 512MB
    # Together with the kernel settings (vm.dirty_background_bytes, vm.dirty_bytes),
    # these parameters eliminate IO bursts during CHECKPOINT.
    checkpoint_timeout = 30min
    max_wal_size = 3GB
    min_wal_size = 512MB
    checkpoint_completion_target = 0.9
    seq_page_cost = 1
    # Planner setting. The default is 4; for RAID10 it is recommended to decrease it.
    random_page_cost = 2.5
    # Tells the planner the approximate total memory available to postgres,
    # including pages in the OS page cache.
    effective_cache_size = 22GB

  2. Kernel and OS parameters:
    The settings are given in the profile file format of the tuned daemon:
    [sysctl]
    # Thresholds for the volume of dirty pages (PageCache) at which the kernel
    # starts background/forced writeback of these pages to disk.
    # The defaults are set in percent (10, 30), which on modern systems with a
    # large amount of RAM leads to bursts of load on the I/O system.
    # Important for optimizing CHECKPOINT performance and eliminating I/O bursts.
    # The absolute values given here are for a RAID controller with a 512MB
    # write-back cache.
    vm.dirty_background_bytes = 67108864
    vm.dirty_bytes = 536870912
    # Use swap as little as possible, but do not turn it off entirely, to minimize OOM kills.
    vm.swappiness = 1
    # The scheduler assumes a process keeps using the CPU cache for this period.
    # Increasing this parameter reduces migrations of processes between CPUs.
    # It has a noticeable effect on performance.
    kernel.sched_migration_cost_ns = 5000000
    # Disables grouping of processes by session for CPU scheduling.
    # Should be 0 on servers; has a noticeable effect on performance.
    kernel.sched_autogroup_enabled = 0
    # Memory reserved for huge pages; has a noticeable effect on performance.
    # The calculation is described in the documentation:
    # https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-HUGE-PAGES
    vm.nr_hugepages = 5000

    [vm]
    # Transparent huge pages are disabled. Since the DBMS does not use
    # homogeneous long-lived memory segments, disabling is recommended,
    # especially since regular huge pages are enabled.
    transparent_hugepages = never

    # CPU power-saving options. They hardly matter in a virtual machine,
    # but are required on bare metal.
    [cpu]
    force_latency = 1
    governor = performance
    energy_perf_bias = performance
    min_perf_pct = 100

  3. File system:
    # filesystem creation:
    # stride and stripe_width are calculated for the RAID 10 array above
    # (6 disks, 1024 KB stripe size)
    mkfs.ext4 -E stride=256,stripe_width=768 /dev/sdb
    # mount options:
    /dev/sdb /var/lib/pgsql ext4 noatime,nodiratime,data=ordered,barrier=0,errors=remount-ro 0 2
    # noatime,nodiratime - disable access-time updates for files and directories
    # data=ordered - journaling for metadata only; metadata is written after the data
    # barrier=0 - write barriers ensure consistent journaling of FS data;
    #   on battery-backed RAID controllers the barrier can be disabled
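The work_mem comment in item 1 above can be made concrete with a back-of-the-envelope check (a sketch; keep in mind that work_mem is a per-sort/per-hash limit, so a single query can use several such allocations):

```python
# Sketch: why the configuration comment suggests lowering work_mem.
# Worst case scales with the connection count (and with the number of
# sort/hash nodes per query - here we assume just one per session).
max_connections = 1000
work_mem_mb = 128
ram_gb = 32

worst_case_gb = max_connections * work_mem_mb / 1024
print(worst_case_gb)           # 125.0 GB with one sort per session
print(worst_case_gb > ram_gb)  # True: far beyond the 32 GB of RAM
```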
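The vm.nr_hugepages value in item 2 can be sanity-checked against shared_buffers (a sketch assuming 2 MB huge pages; the PostgreSQL documentation recommends deriving the exact figure from the running postmaster's VmPeak):

```python
# Sketch: estimating vm.nr_hugepages for PostgreSQL, following the approach
# described in the kernel-resources section of the PostgreSQL docs.
import math

huge_page_size_kb = 2048             # typical x86_64 huge page: 2 MB (assumption)
shared_buffers_kb = 9 * 1024 * 1024  # shared_buffers = 9GB

# shared_buffers alone needs this many huge pages...
pages_min = math.ceil(shared_buffers_kb / huge_page_size_kb)
print(pages_min)  # 4608

# ...plus headroom for the rest of the shared memory segment, which is why
# the profile above sets a rounded-up value of 5000.
print(pages_min <= 5000)  # True
```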
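The stride and stripe_width values in item 3 follow directly from the RAID geometry; a quick cross-check:

```python
# Sketch: deriving the ext4 stride/stripe_width values for the array above
# (RAID 10 over 6 disks, 1024 KB stripe size, 4 KB filesystem block).
chunk_kb = 1024     # RAID stripe (chunk) size per disk
block_kb = 4        # ext4 block size
data_disks = 6 // 2 # RAID 10 on 6 disks: 3 disks carry unique data

stride = chunk_kb // block_kb       # filesystem blocks per chunk
stripe_width = stride * data_disks  # filesystem blocks per full stripe

print(stride, stripe_width)  # 256 768
```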

The non-default settings from postgresql.conf (everything else was left at the commented-out defaults):

listen_addresses = '*'
max_connections = 1000
ssl = off
row_security = off
shared_buffers = 9GB
huge_pages = on
temp_buffers = 256MB
work_mem = 128MB
maintenance_work_mem = 512MB
dynamic_shared_memory_type = posix
max_files_per_process = 10000
shared_preload_libraries = 'online_analyze, plantuner'
bgwriter_delay = 20ms
bgwriter_lru_maxpages = 400
bgwriter_lru_multiplier = 4.0
bgwriter_flush_after = 0
effective_io_concurrency = 3
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
wal_level = minimal
wal_sync_method = fdatasync
wal_compression = on
wal_buffers = -1
wal_writer_delay = 200ms
wal_writer_flush_after = 1MB
commit_delay = 1000
checkpoint_timeout = 30min
max_wal_size = 3GB
min_wal_size = 512MB
checkpoint_completion_target = 0.9
max_wal_senders = 0
seq_page_cost = 1
random_page_cost = 2.5
effective_cache_size = 22GB
from_collapse_limit = 20
join_collapse_limit = 20
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = on
log_line_prefix = '< %m > '
log_lock_waits = on
log_statement = 'all'
log_temp_files = 0
log_timezone = 'W-SU'
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 4
datestyle = 'iso, dmy'
timezone = 'W-SU'
lc_messages = 'ru_RU.UTF-8'
lc_monetary = 'ru_RU.UTF-8'
lc_numeric = 'ru_RU.UTF-8'
lc_time = 'ru_RU.UTF-8'
default_text_search_config = 'pg_catalog.russian'
max_locks_per_transaction = 256
escape_string_warning = off
standard_conforming_strings = off
online_analyze.threshold = 50
online_analyze.scale_factor = 0.1
online_analyze.enable = on
online_analyze.verbose = off
online_analyze.local_tracking = on
online_analyze.min_interval = 10000
online_analyze.table_type = 'temporary'
plantuner.fix_empty_table = 'on'

MS SQL was configured as follows:
[screenshots: MS SQL Server settings]
1C cluster settings were left at their defaults:
[screenshots: 1C cluster settings]
No antivirus software or other third-party software was installed on the servers.
For MS SQL, the tempdb database was moved to a separate logical drive. However, the data files and transaction log files of the databases were located on the same logical drive (i.e., data files and transaction logs were not separated onto different logical drives).
Windows disk indexing was disabled on all logical disks of the MS SQL Server machine (as is common in most production environments).
Description of the basic automated testing scenario algorithm

The main simulated testing period is 1 year, during which documents and reference data are created for each day according to the specified parameters.
For each simulated day, the following data-entry and reporting blocks are executed:

  1. Block 1 "SPR_PTU" – "Receipt of goods and services"
    • The "Counterparties" directory is opened
    • A new "Counterparties" directory element with the "Supplier" type is created
    • A new "Contracts" directory element with the "With supplier" type is created for the new counterparty
    • The "Nomenclature" directory is opened
    • A set of "Nomenclature" directory elements with the "Goods" type is created
    • A set of "Nomenclature" directory elements with the "Service" type is created
    • The list of "Receipt of goods and services" documents is opened
    • A new "Receipt of goods and services" document is created, whose "Goods" and "Services" tabular sections are filled with the created data sets
    • The "Account 41 card" report for the current month is generated (if an additional generation interval is specified)
  2. Block 2 "SPR_RTU" – "Sales of goods and services"
    • The "Counterparties" directory is opened
    • A new "Counterparties" directory element with the "Buyer" type is created
    • A new "Contracts" directory element with the "With buyer" type is created for the new counterparty
    • The list of "Sales of goods and services" documents is opened
    • A new "Sales of goods and services" document is created, whose "Goods" and "Services" tabular sections are filled according to the specified parameters from the previously created data
    • The "Account 41 card" report for the current month is generated (if an additional generation interval is specified)
    • The "Account 41 card" report for the current month is generated

At the end of each month in which documents were created, the following blocks are executed:

  1. The "Account 41 card" report is generated from the beginning of the year to the end of the month
  2. A "Balance sheet" report is generated from the beginning of the year to the end of the month
  3. The "Month-end closing" routine procedure is executed

The result is the test execution time, reported in hours, minutes, seconds, and milliseconds.
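The elapsed-time report can be sketched as follows (format_elapsed is a hypothetical helper for illustration, not part of the 1C test script):

```python
# Sketch: formatting an elapsed-time measurement in hours, minutes,
# seconds and milliseconds, as the test script reports it.
def format_elapsed(total_ms: int) -> str:
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, ms = divmod(rem, 1000)
    return f"{hours}h {minutes}m {seconds}s {ms}ms"

print(format_elapsed(4_515_250))  # 1h 15m 15s 250ms
```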
Basic features of the test script:

  1. Ability to disable/enable individual blocks
  2. Ability to specify the total number of documents for each block
  3. Ability to specify the number of documents for each block per day
  4. Ability to specify the number of goods and services within documents
  5. Ability to specify lists of quantity and price values per line; used to create varied value sets in the documents

      Basic test plan for each of the bases:

      1. "First Test". A small number of documents with simple tabular sections is created under one user; "month closing" operations are performed
        • Expected run time: about 20 minutes. Data span: 1 month. Data: 50 "PTU" documents, 50 "RTU" documents, 100 "Nomenclature" items, 50 "Suppliers" + "Contract" items, 50 "Buyers" + "Contract" items, 2 "Month Closing" operations. Each document contains 1 product and 1 service
      2. "Second Test". A substantial number of documents with filled tabular sections is created under one user; month closing operations are performed
        • Expected run time: 50-60 minutes. Data span: 3 months. Data: 90 "PTU" documents, 90 "RTU" documents, 540 "Nomenclature" items, 90 "Suppliers" + "Contract" items, 90 "Buyers" + "Contract" items, 3 "Month Closing" operations. Each document contains 3 goods and 3 services
      3. "Third Test". The script is run simultaneously under two users. A significant number of documents with filled tabular sections is created. The final test time is taken as the maximum of the two
        • Expected run time: 40-60 minutes. Data span: 2 months. Data: 50 "PTU" documents, 50 "RTU" documents, 300 "Nomenclature" items, 50 "Suppliers" + "Contract" items, 50 "Buyers" + "Contract" items. Each document contains 3 goods and 3 services
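            The stated data volumes are internally consistent under one plausible reading: each "PTU" document introduces (goods + services) new "Nomenclature" items. A sketch checking this, with hypothetical field names that are illustrative only, not from the actual 1C script:

```python
# Hypothetical representation of the three test configurations;
# field names are illustrative, counts are taken from the test plan above.
TESTS = {
    "First Test":  {"months": 1, "ptu": 50, "rtu": 50, "nomenclature": 100,
                    "goods": 1, "services": 1},
    "Second Test": {"months": 3, "ptu": 90, "rtu": 90, "nomenclature": 540,
                    "goods": 3, "services": 3},
    "Third Test":  {"months": 2, "ptu": 50, "rtu": 50, "nomenclature": 300,
                    "goods": 3, "services": 3},
}

def expected_nomenclature(cfg: dict) -> int:
    # Assumed relationship: each PTU document introduces
    # (goods + services) new "Nomenclature" items.
    return cfg["ptu"] * (cfg["goods"] + cfg["services"])

for name, cfg in TESTS.items():
    assert expected_nomenclature(cfg) == cfg["nomenclature"], name
```

            For example, the "Second Test" has 90 PTU documents with 3 goods and 3 services each, giving the stated 540 "Nomenclature" items.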

            Additional test plan for each of the bases:

            1. Changing the database structure and checking the table restructuring time:
              • Modification of the "Contracts" directory
              • Modification of the "Counterparties" directory
              • Modification of the "Sales of goods and services" document
            2. Re-posting the "Receipt of goods and services" and "Sales of goods and services" documents for a specified period
            3. Unloading the database to the 1C "*.dt" file format and loading it back
            4. Executing the "Month Closing" routine procedure for one of the old periods

              Results

              And now for the most interesting part: the results for MS SQL Server.
              First run: [screenshot of detailed results]
              Second run: [screenshot of detailed results]
              Third run: [screenshot of detailed results]
              After the PostgreSQL optimizations given here and here, as well as the OS and file-system optimizations described above, we got the following results for PostgreSQL:
              First run: [screenshot of detailed results]
              Second run: [screenshot of detailed results]
              Third run: [screenshot of detailed results]
              Gilev test :

              Indicator                                MS SQL          PostgreSQL      % difference (improvement) of PostgreSQL relative to MS SQL
              Gilev's synthetic test (average)         14.41           12.55           -14.82
              Max. speed, 1 thread (average)           32 404.67 KB/s  33 472.67 KB/s  +3.3
              Max. speed (average)                     51 744.00 KB/s  86 323.67 KB/s  +66.83
              Recommended number of users (average)    42              70              +66.67

              As the results show, PostgreSQL lost to MS SQL by an average of 14.82% on the overall synthetic test. However, on the last two metrics PostgreSQL performed significantly better than MS SQL.
              Specialized tests for 1C Accounting :

              Test description                                              MS SQL, sec   PostgreSQL, sec   % difference (improvement) of PostgreSQL relative to MS SQL
              Script – "First Test"                                         1056.45       1064.00           -0.7
              Script – "Second Test"                                        3230.80       3236.60           -0.2
              Script – "Third Test"                                         1707.45       1738.80           -1.8
              Script – "Third Test" (4 threads)                             1859.10       1864.90           -0.3
              Changing the database structure and restructuring             30            22                +26.7
              Posting of PTU and RTU documents for 01.01.2018-31.12.2018    138.50        164.50            -15.8
              Posting of all PTU and RTU documents in the database          316           397               -20.4
              Unloading the database to a *.dt file                         87            87                0
              Loading the database from a *.dt file                         201           207               -2.9
              Executing the "Month Closing" procedure for December 2018     78            64.50             +17.3
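              The article does not state how the "% difference" column is computed, but every row of this table is reproduced by (MS − PG) / max(MS, PG) · 100, positive when PostgreSQL is faster. A sketch under that assumption:

```python
def pct_improvement(ms_sec: float, pg_sec: float) -> float:
    """Relative difference of PostgreSQL vs MS SQL run times, in percent.
    Positive means PostgreSQL was faster. Dividing by max(ms, pg) is an
    assumption; it matches every published row of the table above."""
    if ms_sec == pg_sec:
        return 0.0
    return round((ms_sec - pg_sec) / max(ms_sec, pg_sec) * 100, 1)

# (MS SQL sec, PostgreSQL sec, published %) rows from the table above
rows = [(1056.45, 1064.0, -0.7), (3230.8, 3236.6, -0.2),
        (1707.45, 1738.8, -1.8), (1859.1, 1864.9, -0.3),
        (30, 22, 26.7), (138.5, 164.5, -15.8), (316, 397, -20.4),
        (87, 87, 0.0), (201, 207, -2.9), (78, 64.5, 17.3)]
for ms, pg, published in rows:
    assert pct_improvement(ms, pg) == published
```

              For instance, the restructuring row gives (30 − 22) / 30 · 100 = +26.7, matching the published value.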

              As you can see from the results, 1C Accounting works almost equally on MS SQL and PostgreSQL with the above settings.
              In both cases, the RDBMS was stable.
              Of course, more fine-tuning may be needed on the DBMS side as well as on the OS and file-system side. Everything was done as described in the publications which claimed that switching from MS SQL to PostgreSQL would bring a significant performance gain, or at least parity. Moreover, a number of OS and file-system optimizations were made for CentOS, as described above.
              It should be noted that Gilev's test was run many times for PostgreSQL and the best results are given; for MS SQL it was run only 3 times, because we did not pursue further MS SQL optimization: all subsequent effort went into bringing the elephant (PostgreSQL) up to MS SQL's level.
              After reaching the best achievable gap between MS SQL and PostgreSQL on Gilev's synthetic test, the specialized 1C Accounting tests described above were conducted.
              The overall conclusion: despite PostgreSQL's significant performance drawdown relative to MS SQL on Gilev's synthetic test, with proper configuration 1C Accounting can be run on either MS SQL or PostgreSQL.

              Remarks

              It should be noted right away that this analysis was done only to compare the performance of 1C in different DBMS.
              This analysis and conclusion is correct only for 1C Accounting under the conditions and software versions described above. Based on the analysis obtained, it is impossible to accurately conclude what will happen with other settings and software versions, as well as with a different 1C configuration.
              However, the results of Gilev's test suggest that on all 1C configurations of version 8.3 and newer, with proper settings, the performance drawdown of PostgreSQL relative to MS SQL is likely to be no more than 15%. It is also worth considering that any detailed testing for an accurate comparison takes considerable time and resources. Based on this, a more likely assumption is that 1C version 8.3 and newer can be migrated from MS SQL to PostgreSQL with a performance loss of at most about 15%. No objective obstacles to the transition were identified: these 15% may not appear at all, and if they do, buying slightly more powerful hardware is usually enough.
              It is also important to note that the tested databases were small, i.e. much smaller than 100 GB in size, and the maximum number of simultaneously active threads was 4. This means that for large databases with a size significantly larger than 100 GB (e.g. about 1 TB), as well as for databases with intensive accesses (tens or hundreds of simultaneously active threads) these results may not be correct.
              For a more objective analysis, it would be useful in the future to compare the released MS SQL Server 2019 Developer and PostgreSQL 12 installed on the same CentOS OS, as well as MS SQL on the latest version of Windows Server. Hardly anyone currently installs PostgreSQL on Windows, since the performance degradation of PostgreSQL there would be very substantial.
              Of course, Gilev's test speaks about performance in general, not just for 1C. However, at this point it is too early to say that MS SQL will always be significantly better than PostgreSQL: there are not enough facts. To confirm or refute this statement, other tests are needed. For example, one would need to write both atomic operations and comprehensive tests, run them many times under different conditions, record the run times, and take the average values. Only then can these values be compared objectively.
              At the moment we are not ready to make such an analysis, but in the future we will probably do it. Then we will write in more detail about which operations PostgreSQL is better than MS SQL and by what percentage, and where MS SQL is better than PostgreSQL and by what percentage.
              Also, our test did not apply the MS SQL optimization methods described here; perhaps the author of that article simply forgot to turn off Windows disk indexing.
              When comparing the two DBMS, one must also remember another important point: PostgreSQL is a free and open-source DBMS, while MS SQL is paid and closed-source.
              Now about the Gilev test itself. Outside the timed runs, traces were taken for the synthetic test (the first test) and for all the other tests. The first test mostly consists of queries for atomic operations (insert, update, delete, and read), as well as complex ones (accessing multiple tables, and creating, modifying, and deleting tables), with different amounts of data processed. Therefore, Gilev's synthetic test can be considered quite objective for comparing the average unified performance of the two environments (including the DBMS) relative to each other. That is, the absolute values themselves say nothing, but their ratio between two different environments is quite objective.
              About the rest of Gilev's tests: the trace shows that the maximum number of threads was 7, yet the conclusion reported more than 50 users. It is also not quite clear from the queries how the other indicators are calculated. Therefore, the remaining tests are not objective; they are highly variable and approximate. More accurate values can come only from specialized tests that take into account not only the specifics of the system itself but also the behavior of real users.

              Acknowledgements

              1. Colleagues who performed the 1C setup, ran the Gilev tests, and contributed significantly to this publication:
                • Roman Butz – 1C team leader
                • Alexander Gryaznov – 1C programmer
              2. Fortis colleagues who made significant contributions to the optimization of CentOS, PostgreSQL, etc., but wished to remain incognito
              3. Special thanks to uaggster and BP1988 for consulting on some points about MS SQL and Windows.

              Afterword

              A curious analysis was also made in this article
              What were your results and how did you do the testing?
